Apr 21 10:24:51.936754 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:24:51.939656 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:24:51.939683 kernel: BIOS-provided physical RAM map:
Apr 21 10:24:51.939695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:24:51.939705 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 21 10:24:51.939714 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 21 10:24:51.939726 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 21 10:24:51.939738 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 21 10:24:51.939750 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 21 10:24:51.939766 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 21 10:24:51.939820 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 21 10:24:51.939834 kernel: NX (Execute Disable) protection: active
Apr 21 10:24:51.939846 kernel: APIC: Static calls initialized
Apr 21 10:24:51.939859 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:24:51.939874 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 21 10:24:51.939892 kernel: SMBIOS 2.7 present.
Apr 21 10:24:51.939906 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 21 10:24:51.939920 kernel: Hypervisor detected: KVM
Apr 21 10:24:51.939933 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:24:51.939947 kernel: kvm-clock: using sched offset of 3587946729 cycles
Apr 21 10:24:51.939962 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:24:51.939977 kernel: tsc: Detected 2499.996 MHz processor
Apr 21 10:24:51.939989 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:24:51.940001 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:24:51.940014 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 21 10:24:51.940031 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:24:51.940044 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:24:51.940056 kernel: Using GB pages for direct mapping
Apr 21 10:24:51.940070 kernel: Secure boot disabled
Apr 21 10:24:51.940083 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:24:51.940096 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 21 10:24:51.940109 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:24:51.940122 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:24:51.940136 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:24:51.940153 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 21 10:24:51.940165 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 21 10:24:51.940179 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:24:51.940192 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:24:51.940205 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 21 10:24:51.940219 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 21 10:24:51.940240 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:24:51.940259 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:24:51.940274 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 21 10:24:51.940290 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 21 10:24:51.940306 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 21 10:24:51.940322 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 21 10:24:51.940337 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 21 10:24:51.940353 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 21 10:24:51.940372 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 21 10:24:51.940388 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 21 10:24:51.940404 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 21 10:24:51.940420 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 21 10:24:51.940435 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 21 10:24:51.940451 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 21 10:24:51.940467 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 10:24:51.940482 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 10:24:51.940498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 21 10:24:51.940517 kernel: NUMA: Initialized distance table, cnt=1
Apr 21 10:24:51.940533 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 21 10:24:51.940549 kernel: Zone ranges:
Apr 21 10:24:51.940565 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:24:51.940581 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 21 10:24:51.940596 kernel: Normal empty
Apr 21 10:24:51.940611 kernel: Movable zone start for each node
Apr 21 10:24:51.940625 kernel: Early memory node ranges
Apr 21 10:24:51.940639 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:24:51.940657 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 21 10:24:51.940670 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 21 10:24:51.940684 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 21 10:24:51.940698 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:24:51.940712 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:24:51.940726 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:24:51.940740 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 21 10:24:51.940753 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 21 10:24:51.940767 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:24:51.941823 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 21 10:24:51.941844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:24:51.941858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:24:51.941873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:24:51.941888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:24:51.941902 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:24:51.941917 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:24:51.941932 kernel: TSC deadline timer available
Apr 21 10:24:51.941946 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:24:51.941959 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:24:51.941979 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 21 10:24:51.941993 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:24:51.942006 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:24:51.942020 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:24:51.942033 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:24:51.942058 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:24:51.942070 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:24:51.942082 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:24:51.942096 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:24:51.942117 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:24:51.942134 kernel: random: crng init done
Apr 21 10:24:51.942146 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:24:51.942159 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 10:24:51.942173 kernel: Fallback order for Node 0: 0
Apr 21 10:24:51.942186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 21 10:24:51.942202 kernel: Policy zone: DMA32
Apr 21 10:24:51.942218 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:24:51.942238 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 21 10:24:51.942254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:24:51.942269 kernel: Kernel/User page tables isolation: enabled
Apr 21 10:24:51.942285 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:24:51.942300 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:24:51.942316 kernel: Dynamic Preempt: voluntary
Apr 21 10:24:51.942331 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:24:51.942348 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:24:51.942363 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:24:51.942382 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:24:51.942398 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:24:51.942413 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:24:51.942429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:24:51.942444 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:24:51.942459 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:24:51.942475 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:24:51.942506 kernel: Console: colour dummy device 80x25
Apr 21 10:24:51.942522 kernel: printk: console [tty0] enabled
Apr 21 10:24:51.942538 kernel: printk: console [ttyS0] enabled
Apr 21 10:24:51.942554 kernel: ACPI: Core revision 20230628
Apr 21 10:24:51.942571 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 21 10:24:51.942591 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:24:51.942607 kernel: x2apic enabled
Apr 21 10:24:51.942623 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:24:51.942641 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:24:51.942657 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 21 10:24:51.942677 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 10:24:51.942693 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 10:24:51.942708 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:24:51.942724 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:24:51.942740 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:24:51.942756 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:24:51.942773 kernel: RETBleed: Vulnerable
Apr 21 10:24:51.943578 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:24:51.943594 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:24:51.943609 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:24:51.943629 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:24:51.943644 kernel: active return thunk: its_return_thunk
Apr 21 10:24:51.943659 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:24:51.943673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:24:51.943689 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:24:51.943704 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:24:51.943719 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 21 10:24:51.943734 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 21 10:24:51.943748 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:24:51.943763 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:24:51.943791 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:24:51.943810 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:24:51.943825 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:24:51.943840 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 21 10:24:51.943854 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 21 10:24:51.943869 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 21 10:24:51.943885 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 21 10:24:51.943900 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 21 10:24:51.943915 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 21 10:24:51.943930 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 21 10:24:51.943945 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:24:51.943960 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:24:51.943978 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:24:51.943994 kernel: landlock: Up and running.
Apr 21 10:24:51.944009 kernel: SELinux: Initializing.
Apr 21 10:24:51.944024 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:24:51.944040 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:24:51.944056 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 10:24:51.944072 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:24:51.944088 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:24:51.944104 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:24:51.944120 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 10:24:51.944137 kernel: signal: max sigframe size: 3632
Apr 21 10:24:51.944151 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:24:51.944167 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:24:51.944180 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:24:51.944195 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:24:51.944211 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:24:51.944226 kernel: .... node #0, CPUs: #1
Apr 21 10:24:51.944242 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 21 10:24:51.944259 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 10:24:51.944277 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:24:51.944293 kernel: smpboot: Max logical packages: 1
Apr 21 10:24:51.944309 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 21 10:24:51.944323 kernel: devtmpfs: initialized
Apr 21 10:24:51.944339 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:24:51.944354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 21 10:24:51.944369 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:24:51.944383 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:24:51.944398 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:24:51.944417 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:24:51.944432 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:24:51.944447 kernel: audit: type=2000 audit(1776767092.082:1): state=initialized audit_enabled=0 res=1
Apr 21 10:24:51.944461 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:24:51.944475 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:24:51.944489 kernel: cpuidle: using governor menu
Apr 21 10:24:51.944505 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:24:51.944519 kernel: dca service started, version 1.12.1
Apr 21 10:24:51.944535 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:24:51.944553 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:24:51.944569 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:24:51.944585 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:24:51.944601 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:24:51.944616 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:24:51.944631 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:24:51.944647 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:24:51.944663 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:24:51.944678 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 21 10:24:51.944697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:24:51.944713 kernel: ACPI: Interpreter enabled
Apr 21 10:24:51.944728 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:24:51.944744 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:24:51.944761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:24:51.944933 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:24:51.944954 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 21 10:24:51.944971 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:24:51.945198 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:24:51.946836 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 21 10:24:51.947036 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 21 10:24:51.947058 kernel: acpiphp: Slot [3] registered
Apr 21 10:24:51.947076 kernel: acpiphp: Slot [4] registered
Apr 21 10:24:51.947093 kernel: acpiphp: Slot [5] registered
Apr 21 10:24:51.947110 kernel: acpiphp: Slot [6] registered
Apr 21 10:24:51.947126 kernel: acpiphp: Slot [7] registered
Apr 21 10:24:51.947148 kernel: acpiphp: Slot [8] registered
Apr 21 10:24:51.947164 kernel: acpiphp: Slot [9] registered
Apr 21 10:24:51.947180 kernel: acpiphp: Slot [10] registered
Apr 21 10:24:51.947196 kernel: acpiphp: Slot [11] registered
Apr 21 10:24:51.947213 kernel: acpiphp: Slot [12] registered
Apr 21 10:24:51.947229 kernel: acpiphp: Slot [13] registered
Apr 21 10:24:51.947246 kernel: acpiphp: Slot [14] registered
Apr 21 10:24:51.947262 kernel: acpiphp: Slot [15] registered
Apr 21 10:24:51.947278 kernel: acpiphp: Slot [16] registered
Apr 21 10:24:51.947295 kernel: acpiphp: Slot [17] registered
Apr 21 10:24:51.947314 kernel: acpiphp: Slot [18] registered
Apr 21 10:24:51.947330 kernel: acpiphp: Slot [19] registered
Apr 21 10:24:51.947346 kernel: acpiphp: Slot [20] registered
Apr 21 10:24:51.947363 kernel: acpiphp: Slot [21] registered
Apr 21 10:24:51.947379 kernel: acpiphp: Slot [22] registered
Apr 21 10:24:51.947395 kernel: acpiphp: Slot [23] registered
Apr 21 10:24:51.947412 kernel: acpiphp: Slot [24] registered
Apr 21 10:24:51.947428 kernel: acpiphp: Slot [25] registered
Apr 21 10:24:51.947444 kernel: acpiphp: Slot [26] registered
Apr 21 10:24:51.947463 kernel: acpiphp: Slot [27] registered
Apr 21 10:24:51.947480 kernel: acpiphp: Slot [28] registered
Apr 21 10:24:51.947496 kernel: acpiphp: Slot [29] registered
Apr 21 10:24:51.947512 kernel: acpiphp: Slot [30] registered
Apr 21 10:24:51.947528 kernel: acpiphp: Slot [31] registered
Apr 21 10:24:51.947544 kernel: PCI host bridge to bus 0000:00
Apr 21 10:24:51.947699 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:24:51.948916 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:24:51.949068 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:24:51.949192 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 21 10:24:51.949317 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:24:51.949440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:24:51.949604 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 21 10:24:51.949755 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 21 10:24:51.953826 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 21 10:24:51.954004 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 21 10:24:51.954145 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 21 10:24:51.954286 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 21 10:24:51.954425 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 21 10:24:51.954564 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 21 10:24:51.954704 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 21 10:24:51.954860 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 21 10:24:51.955006 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 21 10:24:51.955143 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 21 10:24:51.955275 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:24:51.955407 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 21 10:24:51.955540 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:24:51.955681 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:24:51.956951 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 21 10:24:51.957130 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:24:51.957277 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 21 10:24:51.957299 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:24:51.957315 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:24:51.957333 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:24:51.957350 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:24:51.957366 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 21 10:24:51.957389 kernel: iommu: Default domain type: Translated
Apr 21 10:24:51.957406 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:24:51.957421 kernel: efivars: Registered efivars operations
Apr 21 10:24:51.957438 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:24:51.957455 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:24:51.957471 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 21 10:24:51.957488 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 21 10:24:51.957633 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 21 10:24:51.957787 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 21 10:24:51.957947 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:24:51.957968 kernel: vgaarb: loaded
Apr 21 10:24:51.957985 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 21 10:24:51.958001 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 21 10:24:51.958018 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:24:51.958034 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:24:51.958051 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:24:51.958067 kernel: pnp: PnP ACPI init
Apr 21 10:24:51.958087 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:24:51.958104 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:24:51.958120 kernel: NET: Registered PF_INET protocol family
Apr 21 10:24:51.958136 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:24:51.958153 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 21 10:24:51.958169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:24:51.958186 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 10:24:51.958203 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 21 10:24:51.958219 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 21 10:24:51.958239 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:24:51.958256 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:24:51.958271 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:24:51.958288 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:24:51.958424 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:24:51.958552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:24:51.958677 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:24:51.960908 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 21 10:24:51.961066 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:24:51.961226 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 21 10:24:51.961248 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:24:51.961265 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:24:51.961280 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:24:51.961294 kernel: clocksource: Switched to clocksource tsc
Apr 21 10:24:51.961310 kernel: Initialise system trusted keyrings
Apr 21 10:24:51.961324 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 21 10:24:51.961341 kernel: Key type asymmetric registered
Apr 21 10:24:51.961361 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:24:51.961376 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:24:51.961392 kernel: io scheduler mq-deadline registered
Apr 21 10:24:51.961408 kernel: io scheduler kyber registered
Apr 21 10:24:51.961422 kernel: io scheduler bfq registered
Apr 21 10:24:51.961438 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:24:51.961454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:24:51.961471 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:24:51.961488 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:24:51.961508 kernel: i8042: Warning: Keylock active
Apr 21 10:24:51.961525 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:24:51.961542 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:24:51.961714 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 21 10:24:51.961869 kernel: rtc_cmos 00:00: registered as rtc0
Apr 21 10:24:51.961998 kernel: rtc_cmos 00:00: setting system clock to 2026-04-21T10:24:51 UTC (1776767091)
Apr 21 10:24:51.962127 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 21 10:24:51.962148 kernel: intel_pstate: CPU model not supported
Apr 21 10:24:51.962170 kernel: efifb: probing for efifb
Apr 21 10:24:51.962187 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 21 10:24:51.962203 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 21 10:24:51.962220 kernel: efifb: scrolling: redraw
Apr 21 10:24:51.962237 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:24:51.962255 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:24:51.962271 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:24:51.962288 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:24:51.962305 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:24:51.962325 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:24:51.962342 kernel: Segment Routing with IPv6
Apr 21 10:24:51.962359 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:24:51.962375 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:24:51.962392 kernel: Key type dns_resolver registered
Apr 21 10:24:51.962409 kernel: IPI shorthand broadcast: enabled
Apr 21 10:24:51.962452 kernel: sched_clock: Marking stable (491001601, 133473108)->(694228941, -69754232)
Apr 21 10:24:51.962473 kernel: registered taskstats version 1
Apr 21 10:24:51.962490 kernel: Loading compiled-in X.509 certificates
Apr 21 10:24:51.962511 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:24:51.962529 kernel: Key type .fscrypt registered
Apr 21 10:24:51.962546 kernel: Key type fscrypt-provisioning registered
Apr 21 10:24:51.962563 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:24:51.962581 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:24:51.962598 kernel: ima: No architecture policies found
Apr 21 10:24:51.962616 kernel: clk: Disabling unused clocks
Apr 21 10:24:51.962633 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:24:51.962651 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:24:51.962672 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:24:51.962690 kernel: Run /init as init process
Apr 21 10:24:51.962708 kernel: with arguments:
Apr 21 10:24:51.962725 kernel: /init
Apr 21 10:24:51.962742 kernel: with environment:
Apr 21 10:24:51.962763 kernel: HOME=/
Apr 21 10:24:51.964447 kernel: TERM=linux
Apr 21 10:24:51.964474 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:24:51.964503 systemd[1]: Detected virtualization amazon.
Apr 21 10:24:51.964522 systemd[1]: Detected architecture x86-64.
Apr 21 10:24:51.964539 systemd[1]: Running in initrd.
Apr 21 10:24:51.964557 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:24:51.964575 systemd[1]: Hostname set to .
Apr 21 10:24:51.964592 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:24:51.964606 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:24:51.964621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:24:51.964649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:24:51.964674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:24:51.964697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:24:51.964721 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:24:51.964750 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:24:51.964815 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:24:51.964837 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:24:51.964853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:24:51.964871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:24:51.964888 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:24:51.964906 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:24:51.964923 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:24:51.964945 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:24:51.964962 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:24:51.964980 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:24:51.964998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:24:51.965015 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:24:51.965033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:24:51.965050 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:24:51.965067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:24:51.965085 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:24:51.965105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:24:51.965123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:24:51.965140 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:24:51.965157 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:24:51.965174 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:24:51.965192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:24:51.965209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:24:51.965226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:24:51.965284 systemd-journald[179]: Collecting audit messages is disabled.
Apr 21 10:24:51.965324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:24:51.965342 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:24:51.965364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:24:51.965382 systemd-journald[179]: Journal started
Apr 21 10:24:51.965417 systemd-journald[179]: Runtime Journal (/run/log/journal/ec271fe4c227f3b50698a09c93e45d98) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:24:51.937820 systemd-modules-load[180]: Inserted module 'overlay'
Apr 21 10:24:51.969805 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:24:51.976232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:51.989795 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:24:51.992860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:24:51.995134 kernel: Bridge firewalling registered
Apr 21 10:24:51.993769 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 21 10:24:51.996469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:24:51.997576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:24:52.000743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:24:52.010815 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:24:52.012059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:24:52.019340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:24:52.039178 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:24:52.040303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:24:52.043210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:24:52.050033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:24:52.053994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:24:52.056173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:24:52.070423 dracut-cmdline[213]: dracut-dracut-053
Apr 21 10:24:52.074201 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:24:52.108537 systemd-resolved[214]: Positive Trust Anchors:
Apr 21 10:24:52.108554 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:24:52.108615 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:24:52.118035 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 21 10:24:52.121108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:24:52.121859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:24:52.163811 kernel: SCSI subsystem initialized
Apr 21 10:24:52.173813 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:24:52.184883 kernel: iscsi: registered transport (tcp)
Apr 21 10:24:52.206935 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:24:52.207021 kernel: QLogic iSCSI HBA Driver
Apr 21 10:24:52.246221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:24:52.256051 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:24:52.282462 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:24:52.282542 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:24:52.282565 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:24:52.325832 kernel: raid6: avx512x4 gen() 17986 MB/s
Apr 21 10:24:52.343813 kernel: raid6: avx512x2 gen() 17893 MB/s
Apr 21 10:24:52.361812 kernel: raid6: avx512x1 gen() 17994 MB/s
Apr 21 10:24:52.379810 kernel: raid6: avx2x4 gen() 17770 MB/s
Apr 21 10:24:52.397814 kernel: raid6: avx2x2 gen() 17739 MB/s
Apr 21 10:24:52.416052 kernel: raid6: avx2x1 gen() 13864 MB/s
Apr 21 10:24:52.416113 kernel: raid6: using algorithm avx512x1 gen() 17994 MB/s
Apr 21 10:24:52.435115 kernel: raid6: .... xor() 21432 MB/s, rmw enabled
Apr 21 10:24:52.435187 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:24:52.456895 kernel: xor: automatically using best checksumming function avx
Apr 21 10:24:52.618813 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:24:52.629115 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:24:52.633999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:24:52.655249 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 21 10:24:52.660447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:24:52.670626 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:24:52.688309 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Apr 21 10:24:52.720709 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:24:52.725079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:24:52.788981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:24:52.795969 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:24:52.825220 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:24:52.827454 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:24:52.828757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:24:52.829380 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:24:52.838340 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:24:52.864088 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:24:52.897231 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:24:52.914334 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 21 10:24:52.914633 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 21 10:24:52.915595 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:24:52.918362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:24:52.920729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:24:52.921259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:24:52.921564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:52.922457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:24:52.933267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:24:52.943635 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 21 10:24:52.943492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:24:52.943882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:52.950160 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:24:52.956811 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:0e:28:4f:0d:31
Apr 21 10:24:52.957114 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:24:52.955083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:24:52.964237 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:24:52.977873 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 21 10:24:52.982242 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 21 10:24:52.991928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:52.998072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:24:53.004365 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 21 10:24:53.011229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:24:53.011308 kernel: GPT:9289727 != 33554431
Apr 21 10:24:53.011329 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:24:53.012559 kernel: GPT:9289727 != 33554431
Apr 21 10:24:53.013805 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:24:53.014935 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:24:53.035223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:24:53.074596 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (450)
Apr 21 10:24:53.082804 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (460)
Apr 21 10:24:53.156725 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 21 10:24:53.168017 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 21 10:24:53.183745 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 21 10:24:53.184321 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 21 10:24:53.191993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:24:53.200082 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:24:53.207318 disk-uuid[631]: Primary Header is updated.
Apr 21 10:24:53.207318 disk-uuid[631]: Secondary Entries is updated.
Apr 21 10:24:53.207318 disk-uuid[631]: Secondary Header is updated.
Apr 21 10:24:53.214802 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:24:53.222808 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:24:53.229803 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:24:54.231800 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:24:54.232976 disk-uuid[632]: The operation has completed successfully.
Apr 21 10:24:54.375721 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:24:54.375927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:24:54.401116 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:24:54.405130 sh[975]: Success
Apr 21 10:24:54.420992 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 10:24:54.519688 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:24:54.529043 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:24:54.530676 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:24:54.575961 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:24:54.576037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:24:54.576060 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:24:54.579076 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:24:54.582608 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:24:54.641811 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:24:54.645768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:24:54.647025 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:24:54.651978 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:24:54.654959 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:24:54.683294 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:24:54.683365 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:24:54.687521 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:24:54.694812 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:24:54.707702 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:24:54.711494 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:24:54.718253 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:24:54.726066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:24:54.780694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:24:54.788015 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:24:54.829308 systemd-networkd[1168]: lo: Link UP
Apr 21 10:24:54.834043 systemd-networkd[1168]: lo: Gained carrier
Apr 21 10:24:54.835942 systemd-networkd[1168]: Enumeration completed
Apr 21 10:24:54.836412 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:24:54.836418 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:24:54.837354 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:24:54.838111 systemd[1]: Reached target network.target - Network.
Apr 21 10:24:54.844828 systemd-networkd[1168]: eth0: Link UP
Apr 21 10:24:54.844835 systemd-networkd[1168]: eth0: Gained carrier
Apr 21 10:24:54.844851 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:24:54.873880 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.16.209/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:24:54.894819 ignition[1114]: Ignition 2.19.0
Apr 21 10:24:54.894834 ignition[1114]: Stage: fetch-offline
Apr 21 10:24:54.895100 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:54.895114 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:54.897393 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:24:54.895543 ignition[1114]: Ignition finished successfully
Apr 21 10:24:54.902017 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:24:54.918860 ignition[1177]: Ignition 2.19.0
Apr 21 10:24:54.918874 ignition[1177]: Stage: fetch
Apr 21 10:24:54.919374 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:54.919387 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:54.919513 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:54.974425 ignition[1177]: PUT result: OK
Apr 21 10:24:54.985652 ignition[1177]: parsed url from cmdline: ""
Apr 21 10:24:54.985664 ignition[1177]: no config URL provided
Apr 21 10:24:54.985677 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:24:54.985693 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:24:54.985722 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:54.988876 ignition[1177]: PUT result: OK
Apr 21 10:24:54.988938 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 21 10:24:54.992678 ignition[1177]: GET result: OK
Apr 21 10:24:54.992970 ignition[1177]: parsing config with SHA512: 7227da7227d5abd4f69bf487cc5659a2addc943426524fd1b37b35b31d5955a8dd6f417e23ae7a913789e6105a24949447ceb737810106986ed3ec30bb9b9413
Apr 21 10:24:54.997164 unknown[1177]: fetched base config from "system"
Apr 21 10:24:54.997178 unknown[1177]: fetched base config from "system"
Apr 21 10:24:54.997187 unknown[1177]: fetched user config from "aws"
Apr 21 10:24:54.998949 ignition[1177]: fetch: fetch complete
Apr 21 10:24:54.998959 ignition[1177]: fetch: fetch passed
Apr 21 10:24:54.999039 ignition[1177]: Ignition finished successfully
Apr 21 10:24:55.002427 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:24:55.007029 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:24:55.024638 ignition[1183]: Ignition 2.19.0
Apr 21 10:24:55.024655 ignition[1183]: Stage: kargs
Apr 21 10:24:55.025989 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:55.026005 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:55.026122 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:55.029058 ignition[1183]: PUT result: OK
Apr 21 10:24:55.035703 ignition[1183]: kargs: kargs passed
Apr 21 10:24:55.035768 ignition[1183]: Ignition finished successfully
Apr 21 10:24:55.037913 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:24:55.043014 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:24:55.058352 ignition[1189]: Ignition 2.19.0
Apr 21 10:24:55.058365 ignition[1189]: Stage: disks
Apr 21 10:24:55.058821 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:55.058837 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:55.058960 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:55.060091 ignition[1189]: PUT result: OK
Apr 21 10:24:55.066951 ignition[1189]: disks: disks passed
Apr 21 10:24:55.067026 ignition[1189]: Ignition finished successfully
Apr 21 10:24:55.068605 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:24:55.069719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:24:55.070434 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:24:55.070840 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:24:55.071397 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:24:55.071982 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:24:55.077983 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:24:55.103286 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:24:55.107472 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:24:55.114011 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:24:55.216853 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:24:55.216472 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:24:55.217571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:24:55.224924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:24:55.227820 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:24:55.229637 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:24:55.230750 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:24:55.230802 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:24:55.242025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:24:55.247963 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:24:55.249119 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Apr 21 10:24:55.254806 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:24:55.254868 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:24:55.256660 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:24:55.268931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:24:55.270733 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:24:55.386730 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:24:55.392162 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:24:55.397733 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:24:55.402465 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:24:55.586028 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:24:55.590906 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:24:55.595232 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:24:55.603596 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:24:55.606428 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:24:55.640948 ignition[1334]: INFO : Ignition 2.19.0
Apr 21 10:24:55.640948 ignition[1334]: INFO : Stage: mount
Apr 21 10:24:55.642503 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:55.642503 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:55.642503 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:55.647555 ignition[1334]: INFO : PUT result: OK
Apr 21 10:24:55.651599 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:24:55.652193 ignition[1334]: INFO : mount: mount passed
Apr 21 10:24:55.652193 ignition[1334]: INFO : Ignition finished successfully
Apr 21 10:24:55.654384 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:24:55.661052 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:24:55.668304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:24:55.692931 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1346)
Apr 21 10:24:55.695981 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:24:55.696059 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:24:55.698596 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:24:55.703803 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:24:55.706390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:24:55.728194 ignition[1363]: INFO : Ignition 2.19.0
Apr 21 10:24:55.728194 ignition[1363]: INFO : Stage: files
Apr 21 10:24:55.729718 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:55.729718 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:55.729718 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:55.732890 ignition[1363]: INFO : PUT result: OK
Apr 21 10:24:55.738119 ignition[1363]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:24:55.739440 ignition[1363]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:24:55.739440 ignition[1363]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:24:55.763650 ignition[1363]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:24:55.765138 ignition[1363]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:24:55.765138 ignition[1363]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:24:55.764253 unknown[1363]: wrote ssh authorized keys file for user: core
Apr 21 10:24:55.768049 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:24:55.768049 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:24:55.842798 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:24:55.999573 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:24:55.999573 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:56.002389 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:24:56.021951 systemd-networkd[1168]: eth0: Gained IPv6LL
Apr 21 10:24:56.331182 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:24:56.827991 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:56.827991 ignition[1363]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:24:56.830876 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:24:56.837973 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:24:56.837973 ignition[1363]: INFO : files: files passed
Apr 21 10:24:56.837973 ignition[1363]: INFO : Ignition finished successfully
Apr 21 10:24:56.833469 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:24:56.842136 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:24:56.845373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:24:56.848377 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:24:56.848499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:24:56.863564 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:56.863564 initrd-setup-root-after-ignition[1392]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:56.868011 initrd-setup-root-after-ignition[1396]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:56.868484 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:24:56.869746 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:24:56.875060 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:24:56.906554 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:24:56.906670 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:24:56.907495 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:24:56.908223 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:24:56.909515 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:24:56.916016 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:24:56.929730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:24:56.937976 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:24:56.950607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:24:56.951354 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:24:56.952328 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:24:56.953277 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:24:56.953509 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:24:56.954594 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:24:56.955468 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:24:56.956199 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:24:56.957001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:24:56.957793 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:24:56.958557 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:24:56.959315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:24:56.960118 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:24:56.961362 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:24:56.962124 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:24:56.962856 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:24:56.963040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:24:56.964133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:24:56.965062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:24:56.965704 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:24:56.965862 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:24:56.966513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:24:56.966683 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:24:56.967770 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:24:56.967967 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:24:56.968591 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:24:56.968738 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:24:56.976063 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:24:56.981199 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:24:56.982578 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:24:56.983611 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:24:56.985049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:24:56.985221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:24:56.996035 ignition[1417]: INFO : Ignition 2.19.0
Apr 21 10:24:56.996035 ignition[1417]: INFO : Stage: umount
Apr 21 10:24:56.998467 ignition[1417]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:56.998467 ignition[1417]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:24:56.998467 ignition[1417]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:24:56.998467 ignition[1417]: INFO : PUT result: OK
Apr 21 10:24:56.996869 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:24:56.997105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:24:57.003535 ignition[1417]: INFO : umount: umount passed
Apr 21 10:24:57.004114 ignition[1417]: INFO : Ignition finished successfully
Apr 21 10:24:57.005417 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:24:57.005525 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:24:57.006364 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:24:57.006442 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:24:57.008510 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:24:57.008587 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:24:57.009282 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:24:57.009348 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:24:57.009924 systemd[1]: Stopped target network.target - Network.
Apr 21 10:24:57.010377 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:24:57.010439 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:24:57.012903 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:24:57.013507 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:24:57.017008 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:24:57.017589 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:24:57.018036 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:24:57.018507 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:24:57.020886 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:24:57.021389 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:24:57.021443 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:24:57.021905 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:24:57.021968 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:24:57.022431 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:24:57.022488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:24:57.023154 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:24:57.024135 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:24:57.027761 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:24:57.028961 systemd-networkd[1168]: eth0: DHCPv6 lease lost
Apr 21 10:24:57.031538 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:24:57.031980 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:24:57.033116 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:24:57.033248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:24:57.037883 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:24:57.038024 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:24:57.039822 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:24:57.039885 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:24:57.040616 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:24:57.040679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:24:57.045906 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:24:57.046842 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:24:57.046918 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:24:57.049341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:24:57.049408 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:24:57.050298 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:24:57.050355 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:24:57.050949 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:24:57.051004 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:24:57.051720 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:24:57.065318 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:24:57.065464 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:24:57.067236 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:24:57.067432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:24:57.069353 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:24:57.069446 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:24:57.070378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:24:57.070429 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:24:57.071106 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:24:57.071167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:24:57.072268 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:24:57.072329 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:24:57.073473 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:24:57.073533 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:24:57.082012 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:24:57.082535 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:24:57.082645 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:24:57.084916 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:24:57.084989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:24:57.086519 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:24:57.086583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:24:57.087200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:24:57.087263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:57.091954 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:24:57.092068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:24:57.092953 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:24:57.100024 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:24:57.108333 systemd[1]: Switching root.
Apr 21 10:24:57.139583 systemd-journald[179]: Journal stopped
Apr 21 10:24:58.641301 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:24:58.641390 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:24:58.641413 kernel: SELinux: policy capability open_perms=1
Apr 21 10:24:58.641434 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:24:58.641452 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:24:58.641475 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:24:58.641498 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:24:58.641516 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:24:58.641534 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:24:58.641558 kernel: audit: type=1403 audit(1776767097.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:24:58.641584 systemd[1]: Successfully loaded SELinux policy in 42.113ms.
Apr 21 10:24:58.641607 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.640ms.
Apr 21 10:24:58.641631 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:24:58.641651 systemd[1]: Detected virtualization amazon.
Apr 21 10:24:58.641670 systemd[1]: Detected architecture x86-64.
Apr 21 10:24:58.641689 systemd[1]: Detected first boot.
Apr 21 10:24:58.641709 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:24:58.641727 zram_generator::config[1459]: No configuration found.
Apr 21 10:24:58.641752 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:24:58.641771 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:24:58.641804 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:24:58.641824 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:24:58.641844 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:24:58.641863 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:24:58.641883 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:24:58.641903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:24:58.641922 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:24:58.641942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:24:58.641964 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:24:58.641989 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:24:58.642009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:24:58.642034 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:24:58.642055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:24:58.642078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:24:58.642099 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:24:58.642117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:24:58.642137 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:24:58.642160 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:24:58.642179 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:24:58.642207 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:24:58.642229 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:24:58.642248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:24:58.642267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:24:58.642286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:24:58.642304 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:24:58.642328 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:24:58.642347 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:24:58.642367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:24:58.642387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:24:58.642407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:24:58.642425 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:24:58.642444 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:24:58.642464 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:24:58.642483 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:24:58.642505 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:24:58.642523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:24:58.642542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:24:58.642561 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:24:58.642581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:24:58.642601 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:24:58.642620 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:24:58.642639 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:24:58.642658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:24:58.642681 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:24:58.642702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:24:58.642722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:24:58.642742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:24:58.642762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:24:58.642805 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:24:58.642825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:24:58.642847 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:24:58.642871 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:24:58.642891 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:24:58.642911 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:24:58.642933 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:24:58.642952 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:24:58.642972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:24:58.642992 kernel: fuse: init (API version 7.39)
Apr 21 10:24:58.643014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:24:58.643034 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:24:58.643057 kernel: loop: module loaded
Apr 21 10:24:58.643077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:24:58.643098 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:24:58.643118 systemd[1]: Stopped verity-setup.service.
Apr 21 10:24:58.643170 systemd-journald[1541]: Collecting audit messages is disabled.
Apr 21 10:24:58.643211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:24:58.643233 systemd-journald[1541]: Journal started
Apr 21 10:24:58.643276 systemd-journald[1541]: Runtime Journal (/run/log/journal/ec271fe4c227f3b50698a09c93e45d98) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:24:58.297236 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:24:58.316563 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:24:58.317193 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:24:58.651870 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:24:58.655222 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:24:58.660050 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:24:58.661840 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:24:58.663025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:24:58.664157 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:24:58.666625 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:24:58.667628 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:24:58.668653 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:24:58.668930 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:24:58.670330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:24:58.670541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:24:58.672367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:24:58.672552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:24:58.673684 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:24:58.674825 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:24:58.675696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:24:58.676067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:24:58.678058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:24:58.679078 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:24:58.681216 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:24:58.709187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:24:58.716734 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:24:58.736825 kernel: ACPI: bus type drm_connector registered
Apr 21 10:24:58.731896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:24:58.742927 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:24:58.743604 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:24:58.743655 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:24:58.748800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:24:58.761468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:24:58.773140 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:24:58.774289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:24:58.776690 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:24:58.784119 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:24:58.785113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:24:58.790628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:24:58.791314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:24:58.797003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:24:58.808952 systemd-journald[1541]: Time spent on flushing to /var/log/journal/ec271fe4c227f3b50698a09c93e45d98 is 63.636ms for 977 entries.
Apr 21 10:24:58.808952 systemd-journald[1541]: System Journal (/var/log/journal/ec271fe4c227f3b50698a09c93e45d98) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:24:58.893493 systemd-journald[1541]: Received client request to flush runtime journal.
Apr 21 10:24:58.893554 kernel: loop0: detected capacity change from 0 to 142488
Apr 21 10:24:58.810050 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:24:58.813926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:24:58.818387 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:24:58.820916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:24:58.825659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:24:58.827282 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:24:58.829134 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:24:58.831199 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:24:58.852255 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:24:58.856887 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:24:58.858500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:24:58.871072 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:24:58.897082 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:24:58.925878 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:24:58.963375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:24:58.971079 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Apr 21 10:24:58.971106 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Apr 21 10:24:58.979875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:24:58.980908 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:24:58.994813 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:24:59.004554 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:24:59.015224 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:24:59.041463 kernel: loop1: detected capacity change from 0 to 61336
Apr 21 10:24:59.096154 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:24:59.107076 kernel: loop2: detected capacity change from 0 to 140768
Apr 21 10:24:59.108715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:24:59.132546 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Apr 21 10:24:59.133023 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Apr 21 10:24:59.140569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:24:59.176346 kernel: loop3: detected capacity change from 0 to 217752
Apr 21 10:24:59.310119 kernel: loop4: detected capacity change from 0 to 142488
Apr 21 10:24:59.350810 kernel: loop5: detected capacity change from 0 to 61336
Apr 21 10:24:59.386816 kernel: loop6: detected capacity change from 0 to 140768
Apr 21 10:24:59.424808 kernel: loop7: detected capacity change from 0 to 217752
Apr 21 10:24:59.459365 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:24:59.465743 (sd-merge)[1616]: Merged extensions into '/usr'.
Apr 21 10:24:59.473008 systemd[1]: Reloading requested from client PID 1587 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:24:59.473031 systemd[1]: Reloading...
Apr 21 10:24:59.625801 zram_generator::config[1642]: No configuration found.
Apr 21 10:24:59.856725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:24:59.950471 systemd[1]: Reloading finished in 476 ms.
Apr 21 10:25:00.017872 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:25:00.019049 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:25:00.039073 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:25:00.045527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:25:00.064082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:25:00.091577 systemd[1]: Reloading requested from client PID 1694 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:25:00.091603 systemd[1]: Reloading...
Apr 21 10:25:00.132042 systemd-udevd[1696]: Using default interface naming scheme 'v255'.
Apr 21 10:25:00.137152 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:25:00.137893 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:25:00.142873 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:25:00.146291 systemd-tmpfiles[1695]: ACLs are not supported, ignoring.
Apr 21 10:25:00.146448 systemd-tmpfiles[1695]: ACLs are not supported, ignoring.
Apr 21 10:25:00.152014 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:25:00.152031 systemd-tmpfiles[1695]: Skipping /boot
Apr 21 10:25:00.184278 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:25:00.184295 systemd-tmpfiles[1695]: Skipping /boot
Apr 21 10:25:00.191230 ldconfig[1582]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:25:00.270915 zram_generator::config[1739]: No configuration found.
Apr 21 10:25:00.359073 (udev-worker)[1726]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:25:00.745329 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:25:00.800830 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:25:00.801107 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 21 10:25:00.861137 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:25:00.866326 kernel: ACPI: button: Sleep Button [SLPF]
Apr 21 10:25:00.890837 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 21 10:25:01.131813 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1730)
Apr 21 10:25:01.228249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:25:01.350661 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:25:01.351624 systemd[1]: Reloading finished in 1259 ms.
Apr 21 10:25:01.382913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:25:01.385433 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:25:01.388414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:25:01.453414 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:25:01.461330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:25:01.471185 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:25:01.474796 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:25:01.476872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:25:01.477829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:25:01.479357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:25:01.488163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:25:01.493027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:25:01.503122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:25:01.504017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:25:01.514115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:25:01.525871 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:25:01.534794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:25:01.536106 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:25:01.540574 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:25:01.549013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:25:01.551886 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:25:01.552755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:25:01.559307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:25:01.582226 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:25:01.583317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:25:01.583525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:25:01.611181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:25:01.611419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:25:01.622728 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:25:01.625000 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:25:01.625245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:25:01.642126 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:25:01.650941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:25:01.651605 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:25:01.651698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:25:01.653952 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:25:01.656850 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:25:01.668896 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:25:01.678957 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:25:01.705855 lvm[1917]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:25:01.723174 augenrules[1926]: No rules
Apr 21 10:25:01.726946 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:25:01.730835 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:25:01.736526 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:25:01.738602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:25:01.746886 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:25:01.750239 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:25:01.753553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:25:01.762026 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:25:01.777133 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:25:01.786141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:25:01.789351 lvm[1938]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:25:01.848351 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:25:01.898265 systemd-networkd[1896]: lo: Link UP
Apr 21 10:25:01.898282 systemd-networkd[1896]: lo: Gained carrier
Apr 21 10:25:01.900078 systemd-networkd[1896]: Enumeration completed
Apr 21 10:25:01.900230 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:25:01.901370 systemd-networkd[1896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:25:01.901380 systemd-networkd[1896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:25:01.905222 systemd-networkd[1896]: eth0: Link UP
Apr 21 10:25:01.905423 systemd-networkd[1896]: eth0: Gained carrier
Apr 21 10:25:01.905469 systemd-networkd[1896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:25:01.912204 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:25:01.919896 systemd-networkd[1896]: eth0: DHCPv4 address 172.31.16.209/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:25:01.922382 systemd-resolved[1902]: Positive Trust Anchors:
Apr 21 10:25:01.922771 systemd-resolved[1902]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:25:01.922932 systemd-resolved[1902]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:25:01.931680 systemd-resolved[1902]: Defaulting to hostname 'linux'.
Apr 21 10:25:01.934606 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:25:01.935486 systemd[1]: Reached target network.target - Network.
Apr 21 10:25:01.936066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:25:01.937131 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:25:01.937923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:25:01.939171 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:25:01.940095 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:25:01.941139 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:25:01.941560 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:25:01.942283 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:25:01.942328 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:25:01.942753 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:25:01.945131 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:25:01.948497 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:25:01.957015 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:25:01.959426 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:25:01.960159 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:25:01.960620 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:25:01.961181 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:25:01.961222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:25:01.962973 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:25:01.967014 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:25:01.973131 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:25:01.976862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:25:01.985498 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:25:01.986259 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:25:01.987985 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:25:01.993149 systemd[1]: Started ntpd.service - Network Time Service.
Apr 21 10:25:02.002095 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:25:02.013958 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 21 10:25:02.022827 jq[1955]: false
Apr 21 10:25:02.024652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:25:02.045072 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:25:02.070020 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:25:02.071980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:25:02.074057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:25:02.081056 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:25:02.092399 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:25:02.102260 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:25:02.102550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:25:02.105082 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:25:02.105344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:25:02.127567 extend-filesystems[1956]: Found loop4
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found loop5
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found loop6
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found loop7
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p1
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p2
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p3
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found usr
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p4
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p6
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p7
Apr 21 10:25:02.135955 extend-filesystems[1956]: Found nvme0n1p9
Apr 21 10:25:02.135955 extend-filesystems[1956]: Checking size of /dev/nvme0n1p9
Apr 21 10:25:02.203898 jq[1972]: true
Apr 21 10:25:02.204059 update_engine[1969]: I20260421 10:25:02.203740 1969 main.cc:92] Flatcar Update Engine starting
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: ----------------------------------------------------
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: corporation. Support and training for ntp-4 are
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: available at https://www.nwtime.org/support
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: ----------------------------------------------------
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: proto: precision = 0.063 usec (-24)
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: basedate set to 2026-04-09
Apr 21 10:25:02.204387 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:25:02.186176 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:25:02.179323 dbus-daemon[1954]: [system] SELinux support is enabled
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listen normally on 3 eth0 172.31.16.209:123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listen normally on 4 lo [::1]:123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: bind(21) AF_INET6 fe80::40e:28ff:fe4f:d31%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: unable to create socket on eth0 (5) for fe80::40e:28ff:fe4f:d31%2#123
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: failed to init interface for address fe80::40e:28ff:fe4f:d31%2
Apr 21 10:25:02.220005 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:25:02.220384 update_engine[1969]: I20260421 10:25:02.205513 1969 update_check_scheduler.cc:74] Next update check in 10m40s
Apr 21 10:25:02.203607 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:25:02.184185 ntpd[1958]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:25:02.276029 jq[1986]: true
Apr 21 10:25:02.204900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:25:02.184210 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:25:02.280447 extend-filesystems[1956]: Resized partition /dev/nvme0n1p9
Apr 21 10:25:02.289971 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:25:02.289971 ntpd[1958]: 21 Apr 10:25:02 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:25:02.208365 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:25:02.184221 ntpd[1958]: ----------------------------------------------------
Apr 21 10:25:02.290460 extend-filesystems[2003]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:25:02.303947 tar[1982]: linux-amd64/LICENSE
Apr 21 10:25:02.303947 tar[1982]: linux-amd64/helm
Apr 21 10:25:02.208434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:25:02.184231 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:25:02.209168 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:25:02.184241 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:25:02.209198 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:25:02.184252 ntpd[1958]: corporation. Support and training for ntp-4 are
Apr 21 10:25:02.218010 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:25:02.324227 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:25:02.184262 ntpd[1958]: available at https://www.nwtime.org/support
Apr 21 10:25:02.224256 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:25:02.184272 ntpd[1958]: ----------------------------------------------------
Apr 21 10:25:02.250208 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:25:02.187032 ntpd[1958]: proto: precision = 0.063 usec (-24)
Apr 21 10:25:02.258110 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:25:02.192676 dbus-daemon[1954]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1896 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:25:02.316155 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:25:02.194874 ntpd[1958]: basedate set to 2026-04-09
Apr 21 10:25:02.194899 ntpd[1958]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:25:02.207456 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:25:02.207532 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:25:02.207752 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:25:02.215543 ntpd[1958]: Listen normally on 3 eth0 172.31.16.209:123
Apr 21 10:25:02.215615 ntpd[1958]: Listen normally on 4 lo [::1]:123
Apr 21 10:25:02.215676 ntpd[1958]: bind(21) AF_INET6 fe80::40e:28ff:fe4f:d31%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:25:02.215700 ntpd[1958]: unable to create socket on eth0 (5) for fe80::40e:28ff:fe4f:d31%2#123
Apr 21 10:25:02.215718 ntpd[1958]: failed to init interface for address fe80::40e:28ff:fe4f:d31%2
Apr 21 10:25:02.217565 ntpd[1958]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:25:02.225310 dbus-daemon[1954]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:25:02.270135 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:25:02.270175 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:25:02.413690 systemd-logind[1964]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:25:02.413726 systemd-logind[1964]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 21 10:25:02.413749 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:25:02.417418 systemd-logind[1964]: New seat seat0.
Apr 21 10:25:02.428436 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:25:02.465471 coreos-metadata[1953]: Apr 21 10:25:02.461 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:25:02.465471 coreos-metadata[1953]: Apr 21 10:25:02.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:25:02.465471 coreos-metadata[1953]: Apr 21 10:25:02.465 INFO Fetch successful
Apr 21 10:25:02.465471 coreos-metadata[1953]: Apr 21 10:25:02.465 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.467 INFO Fetch successful
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.467 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.468 INFO Fetch successful
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.468 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.469 INFO Fetch successful
Apr 21 10:25:02.469677 coreos-metadata[1953]: Apr 21 10:25:02.469 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:25:02.472357 coreos-metadata[1953]: Apr 21 10:25:02.471 INFO Fetch failed with 404: resource not found
Apr 21 10:25:02.472357 coreos-metadata[1953]: Apr 21 10:25:02.471 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.490 INFO Fetch successful
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.490 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.490 INFO Fetch successful
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.490 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.491 INFO Fetch successful
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.492 INFO Fetch successful
Apr 21 10:25:02.494834 coreos-metadata[1953]: Apr 21 10:25:02.492 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:25:02.494574 locksmithd[2001]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:25:02.507305 coreos-metadata[1953]: Apr 21 10:25:02.497 INFO Fetch successful
Apr 21 10:25:02.606612 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1733)
Apr 21 10:25:02.606703 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:25:02.606844 bash[2026]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:25:02.594361 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:25:02.595651 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:25:02.598534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:25:02.608234 systemd[1]: Starting sshkeys.service...
Apr 21 10:25:02.614432 extend-filesystems[2003]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:25:02.614432 extend-filesystems[2003]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:25:02.614432 extend-filesystems[2003]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 21 10:25:02.619573 extend-filesystems[1956]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:25:02.615245 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:25:02.615507 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:25:02.702565 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:25:02.710362 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:25:02.720368 dbus-daemon[1954]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:25:02.720562 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:25:02.726268 dbus-daemon[1954]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:25:02.737877 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:25:02.787460 polkitd[2075]: Started polkitd version 121
Apr 21 10:25:02.802645 coreos-metadata[2061]: Apr 21 10:25:02.801 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:25:02.803125 coreos-metadata[2061]: Apr 21 10:25:02.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:25:02.806995 coreos-metadata[2061]: Apr 21 10:25:02.803 INFO Fetch successful
Apr 21 10:25:02.806995 coreos-metadata[2061]: Apr 21 10:25:02.803 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:25:02.806995 coreos-metadata[2061]: Apr 21 10:25:02.805 INFO Fetch successful
Apr 21 10:25:02.808685 unknown[2061]: wrote ssh authorized keys file for user: core
Apr 21 10:25:02.830412 polkitd[2075]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:25:02.830502 polkitd[2075]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:25:02.834284 polkitd[2075]: Finished loading, compiling and executing 2 rules
Apr 21 10:25:02.834960 dbus-daemon[1954]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:25:02.835158 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:25:02.853011 polkitd[2075]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:25:02.877491 update-ssh-keys[2094]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:25:02.878986 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:25:02.887264 systemd[1]: Finished sshkeys.service.
Apr 21 10:25:02.895970 systemd-hostnamed[2000]: Hostname set to (transient)
Apr 21 10:25:02.896096 systemd-resolved[1902]: System hostname changed to 'ip-172-31-16-209'.
Apr 21 10:25:03.071027 containerd[1990]: time="2026-04-21T10:25:03.069494799Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:25:03.184684 ntpd[1958]: bind(24) AF_INET6 fe80::40e:28ff:fe4f:d31%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:25:03.185099 ntpd[1958]: 21 Apr 10:25:03 ntpd[1958]: bind(24) AF_INET6 fe80::40e:28ff:fe4f:d31%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:25:03.185099 ntpd[1958]: 21 Apr 10:25:03 ntpd[1958]: unable to create socket on eth0 (6) for fe80::40e:28ff:fe4f:d31%2#123
Apr 21 10:25:03.185099 ntpd[1958]: 21 Apr 10:25:03 ntpd[1958]: failed to init interface for address fe80::40e:28ff:fe4f:d31%2
Apr 21 10:25:03.184747 ntpd[1958]: unable to create socket on eth0 (6) for fe80::40e:28ff:fe4f:d31%2#123
Apr 21 10:25:03.184764 ntpd[1958]: failed to init interface for address fe80::40e:28ff:fe4f:d31%2
Apr 21 10:25:03.202813 containerd[1990]: time="2026-04-21T10:25:03.202607185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.207643170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.207695358Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.207723514Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.207975863Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.208001780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.208072387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:25:03.208174 containerd[1990]: time="2026-04-21T10:25:03.208090299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.208736 containerd[1990]: time="2026-04-21T10:25:03.208692430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.209811799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.209850747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.209868591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210004380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210274584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210475330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210497451Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210599701Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:25:03.210696 containerd[1990]: time="2026-04-21T10:25:03.210655313Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:25:03.218283 containerd[1990]: time="2026-04-21T10:25:03.218010816Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:25:03.218283 containerd[1990]: time="2026-04-21T10:25:03.218119938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:25:03.218283 containerd[1990]: time="2026-04-21T10:25:03.218145250Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:25:03.218283 containerd[1990]: time="2026-04-21T10:25:03.218218621Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:25:03.218283 containerd[1990]: time="2026-04-21T10:25:03.218246022Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.219817561Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220344856Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220508180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220532976Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220554656Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220579636Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220599831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220620512Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220642188Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220664191Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220683590Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.220809 containerd[1990]: time="2026-04-21T10:25:03.220749513Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.220769497Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222842242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222869479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222889540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222911894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222932147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222954687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222973764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.222994850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223014459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223037579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223061470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223080680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223108241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.224638 containerd[1990]: time="2026-04-21T10:25:03.223132304Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223164240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223184582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223204149Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223281216Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223309156Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223398558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223417959Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223433348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223451151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223466115Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:25:03.225297 containerd[1990]: time="2026-04-21T10:25:03.223480568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 10:25:03.225707 containerd[1990]: time="2026-04-21T10:25:03.223918158Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:25:03.225707 containerd[1990]: time="2026-04-21T10:25:03.224008047Z" level=info msg="Connect containerd service" Apr 21 10:25:03.225707 containerd[1990]: time="2026-04-21T10:25:03.224056509Z" level=info msg="using legacy CRI server" Apr 21 10:25:03.225707 containerd[1990]: time="2026-04-21T10:25:03.224066346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:25:03.225707 containerd[1990]: time="2026-04-21T10:25:03.224220041Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:25:03.229446 containerd[1990]: time="2026-04-21T10:25:03.227284833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:25:03.229446 containerd[1990]: time="2026-04-21T10:25:03.227687657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:25:03.229446 containerd[1990]: time="2026-04-21T10:25:03.227746193Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.232840933Z" level=info msg="Start subscribing containerd event" Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.232928177Z" level=info msg="Start recovering state" Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.233103391Z" level=info msg="Start event monitor" Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.233125809Z" level=info msg="Start snapshots syncer" Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.233142380Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:25:03.233264 containerd[1990]: time="2026-04-21T10:25:03.233160900Z" level=info msg="Start streaming server" Apr 21 10:25:03.233889 containerd[1990]: time="2026-04-21T10:25:03.233859854Z" level=info msg="containerd successfully booted in 0.170339s" Apr 21 10:25:03.234049 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:25:03.524941 tar[1982]: linux-amd64/README.md Apr 21 10:25:03.541127 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:25:03.573914 systemd-networkd[1896]: eth0: Gained IPv6LL Apr 21 10:25:03.577163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:25:03.580717 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:25:03.591037 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 21 10:25:03.602328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:25:03.609533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:25:03.680324 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 21 10:25:03.684713 sshd_keygen[1993]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:25:03.701823 amazon-ssm-agent[2160]: Initializing new seelog logger
Apr 21 10:25:03.701823 amazon-ssm-agent[2160]: New Seelog Logger Creation Complete
Apr 21 10:25:03.701823 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.701823 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.701823 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 processing appconfig overrides
Apr 21 10:25:03.702748 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.702849 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.703020 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 processing appconfig overrides
Apr 21 10:25:03.703408 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.703476 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.703605 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 processing appconfig overrides
Apr 21 10:25:03.705171 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO Proxy environment variables:
Apr 21 10:25:03.710989 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.711124 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:25:03.712401 amazon-ssm-agent[2160]: 2026/04/21 10:25:03 processing appconfig overrides
Apr 21 10:25:03.721698 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:25:03.733189 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:25:03.745984 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:25:03.746261 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:25:03.756169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:25:03.784882 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:25:03.796638 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:25:03.806214 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO https_proxy:
Apr 21 10:25:03.806998 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:25:03.808175 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:25:03.903329 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO http_proxy:
Apr 21 10:25:04.002251 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO no_proxy:
Apr 21 10:25:04.100912 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO Checking if agent identity type OnPrem can be assumed
Apr 21 10:25:04.143515 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO Checking if agent identity type EC2 can be assumed
Apr 21 10:25:04.143515 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO Agent will take identity from EC2
Apr 21 10:25:04.143515 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:25:04.143515 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:25:04.143515 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] Starting Core Agent
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [Registrar] Starting registrar module
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026/04/21 10:25:04 INFO [EC2Identity] EC2 registration was successful.
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:04 INFO [CredentialRefresher] credentialRefresher has started
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:04 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 21 10:25:04.143769 amazon-ssm-agent[2160]: 2026-04-21 10:25:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 21 10:25:04.200112 amazon-ssm-agent[2160]: 2026-04-21 10:25:04 INFO [CredentialRefresher] Next credential rotation will be in 30.76666005785 minutes
Apr 21 10:25:05.159218 amazon-ssm-agent[2160]: 2026-04-21 10:25:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 21 10:25:05.259829 amazon-ssm-agent[2160]: 2026-04-21 10:25:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2196) started
Apr 21 10:25:05.360022 amazon-ssm-agent[2160]: 2026-04-21 10:25:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 21 10:25:05.682364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:05.683719 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:25:05.685636 systemd[1]: Startup finished in 619ms (kernel) + 5.884s (initrd) + 8.145s (userspace) = 14.649s.
Apr 21 10:25:05.691327 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:25:06.184642 ntpd[1958]: Listen normally on 7 eth0 [fe80::40e:28ff:fe4f:d31%2]:123
Apr 21 10:25:06.185202 ntpd[1958]: 21 Apr 10:25:06 ntpd[1958]: Listen normally on 7 eth0 [fe80::40e:28ff:fe4f:d31%2]:123
Apr 21 10:25:06.642552 kubelet[2212]: E0421 10:25:06.642394    2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:25:06.645027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:25:06.645234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:25:06.645913 systemd[1]: kubelet.service: Consumed 1.030s CPU time.
Apr 21 10:25:07.155080 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:25:07.160217 systemd[1]: Started sshd@0-172.31.16.209:22-50.85.169.122:56728.service - OpenSSH per-connection server daemon (50.85.169.122:56728).
Apr 21 10:25:08.185574 sshd[2224]: Accepted publickey for core from 50.85.169.122 port 56728 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:08.187640 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:08.198924 systemd-logind[1964]: New session 1 of user core.
Apr 21 10:25:08.200898 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:25:08.206597 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:25:08.224907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:25:08.232207 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:25:08.237580 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:25:08.362377 systemd[2228]: Queued start job for default target default.target.
Apr 21 10:25:08.369689 systemd[2228]: Created slice app.slice - User Application Slice.
Apr 21 10:25:08.369743 systemd[2228]: Reached target paths.target - Paths.
Apr 21 10:25:08.369765 systemd[2228]: Reached target timers.target - Timers.
Apr 21 10:25:08.371226 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:25:08.392051 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:25:08.392222 systemd[2228]: Reached target sockets.target - Sockets.
Apr 21 10:25:08.392245 systemd[2228]: Reached target basic.target - Basic System.
Apr 21 10:25:08.392297 systemd[2228]: Reached target default.target - Main User Target.
Apr 21 10:25:08.392328 systemd[2228]: Startup finished in 146ms.
Apr 21 10:25:08.392889 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:25:08.400112 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:25:09.116192 systemd[1]: Started sshd@1-172.31.16.209:22-50.85.169.122:60378.service - OpenSSH per-connection server daemon (50.85.169.122:60378).
Apr 21 10:25:10.385475 systemd-resolved[1902]: Clock change detected. Flushing caches.
Apr 21 10:25:11.319271 sshd[2239]: Accepted publickey for core from 50.85.169.122 port 60378 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:11.320961 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:11.326196 systemd-logind[1964]: New session 2 of user core.
Apr 21 10:25:11.337126 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:25:12.018471 sshd[2239]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:12.022113 systemd[1]: sshd@1-172.31.16.209:22-50.85.169.122:60378.service: Deactivated successfully.
Apr 21 10:25:12.024236 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:25:12.025922 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:25:12.027242 systemd-logind[1964]: Removed session 2.
Apr 21 10:25:12.185522 systemd[1]: Started sshd@2-172.31.16.209:22-50.85.169.122:60386.service - OpenSSH per-connection server daemon (50.85.169.122:60386).
Apr 21 10:25:13.179377 sshd[2246]: Accepted publickey for core from 50.85.169.122 port 60386 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:13.181086 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:13.186228 systemd-logind[1964]: New session 3 of user core.
Apr 21 10:25:13.196024 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:25:13.860161 sshd[2246]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:13.865004 systemd[1]: sshd@2-172.31.16.209:22-50.85.169.122:60386.service: Deactivated successfully.
Apr 21 10:25:13.866707 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:25:13.867092 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:25:13.868810 systemd-logind[1964]: Removed session 3.
Apr 21 10:25:14.030214 systemd[1]: Started sshd@3-172.31.16.209:22-50.85.169.122:60402.service - OpenSSH per-connection server daemon (50.85.169.122:60402).
Apr 21 10:25:15.026028 sshd[2253]: Accepted publickey for core from 50.85.169.122 port 60402 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:15.027633 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:15.032812 systemd-logind[1964]: New session 4 of user core.
Apr 21 10:25:15.040059 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:25:15.712373 sshd[2253]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:15.716904 systemd[1]: sshd@3-172.31.16.209:22-50.85.169.122:60402.service: Deactivated successfully.
Apr 21 10:25:15.719281 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:25:15.720044 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:25:15.722229 systemd-logind[1964]: Removed session 4.
Apr 21 10:25:15.882332 systemd[1]: Started sshd@4-172.31.16.209:22-50.85.169.122:60418.service - OpenSSH per-connection server daemon (50.85.169.122:60418).
Apr 21 10:25:16.871272 sshd[2260]: Accepted publickey for core from 50.85.169.122 port 60418 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:16.873017 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:16.878280 systemd-logind[1964]: New session 5 of user core.
Apr 21 10:25:16.884087 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:25:17.415506 sudo[2263]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:25:17.415963 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:25:17.428675 sudo[2263]: pam_unix(sudo:session): session closed for user root
Apr 21 10:25:17.589625 sshd[2260]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:17.593530 systemd[1]: sshd@4-172.31.16.209:22-50.85.169.122:60418.service: Deactivated successfully.
Apr 21 10:25:17.595671 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:25:17.597866 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:25:17.599226 systemd-logind[1964]: Removed session 5.
Apr 21 10:25:17.764157 systemd[1]: Started sshd@5-172.31.16.209:22-50.85.169.122:60430.service - OpenSSH per-connection server daemon (50.85.169.122:60430).
Apr 21 10:25:18.095823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:25:18.104825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:18.329283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:18.340301 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:25:18.384884 kubelet[2278]: E0421 10:25:18.384128    2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:25:18.388292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:25:18.388495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:25:18.751883 sshd[2268]: Accepted publickey for core from 50.85.169.122 port 60430 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:18.753308 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:18.758479 systemd-logind[1964]: New session 6 of user core.
Apr 21 10:25:18.762002 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:25:19.277206 sudo[2287]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:25:19.277622 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:25:19.281648 sudo[2287]: pam_unix(sudo:session): session closed for user root
Apr 21 10:25:19.287389 sudo[2286]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:25:19.287856 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:25:19.303594 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:25:19.305478 auditctl[2290]: No rules
Apr 21 10:25:19.306001 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:25:19.306231 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:25:19.309117 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:25:19.340318 augenrules[2308]: No rules
Apr 21 10:25:19.341858 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:25:19.343731 sudo[2286]: pam_unix(sudo:session): session closed for user root
Apr 21 10:25:19.504815 sshd[2268]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:19.509684 systemd[1]: sshd@5-172.31.16.209:22-50.85.169.122:60430.service: Deactivated successfully.
Apr 21 10:25:19.509823 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:25:19.512075 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:25:19.513523 systemd-logind[1964]: Removed session 6.
Apr 21 10:25:19.688341 systemd[1]: Started sshd@6-172.31.16.209:22-50.85.169.122:57508.service - OpenSSH per-connection server daemon (50.85.169.122:57508).
Apr 21 10:25:20.705206 sshd[2316]: Accepted publickey for core from 50.85.169.122 port 57508 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:25:20.706695 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:20.712407 systemd-logind[1964]: New session 7 of user core.
Apr 21 10:25:20.719040 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:25:21.242437 sudo[2319]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:25:21.242858 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:25:21.875530 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:25:21.875727 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:25:22.371222 dockerd[2335]: time="2026-04-21T10:25:22.371089241Z" level=info msg="Starting up"
Apr 21 10:25:22.490972 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2787861565-merged.mount: Deactivated successfully.
Apr 21 10:25:22.528973 dockerd[2335]: time="2026-04-21T10:25:22.528921259Z" level=info msg="Loading containers: start."
Apr 21 10:25:22.651820 kernel: Initializing XFRM netlink socket
Apr 21 10:25:22.684267 (udev-worker)[2359]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:25:22.741488 systemd-networkd[1896]: docker0: Link UP
Apr 21 10:25:22.761669 dockerd[2335]: time="2026-04-21T10:25:22.761618528Z" level=info msg="Loading containers: done."
Apr 21 10:25:22.786398 dockerd[2335]: time="2026-04-21T10:25:22.786302702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:25:22.786599 dockerd[2335]: time="2026-04-21T10:25:22.786459268Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:25:22.786649 dockerd[2335]: time="2026-04-21T10:25:22.786604698Z" level=info msg="Daemon has completed initialization"
Apr 21 10:25:22.824405 dockerd[2335]: time="2026-04-21T10:25:22.824337217Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:25:22.825023 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:25:23.742052 containerd[1990]: time="2026-04-21T10:25:23.742002993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 21 10:25:24.330923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212332022.mount: Deactivated successfully.
Apr 21 10:25:25.823498 containerd[1990]: time="2026-04-21T10:25:25.823442116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:25.824998 containerd[1990]: time="2026-04-21T10:25:25.824897188Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579423"
Apr 21 10:25:25.826717 containerd[1990]: time="2026-04-21T10:25:25.826323922Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:25.829746 containerd[1990]: time="2026-04-21T10:25:25.829699849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:25.830907 containerd[1990]: time="2026-04-21T10:25:25.830865402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.088820345s"
Apr 21 10:25:25.831007 containerd[1990]: time="2026-04-21T10:25:25.830917704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 21 10:25:25.831517 containerd[1990]: time="2026-04-21T10:25:25.831488767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 10:25:27.383167 containerd[1990]: time="2026-04-21T10:25:27.383104005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:27.384820 containerd[1990]: time="2026-04-21T10:25:27.384735029Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451659"
Apr 21 10:25:27.386518 containerd[1990]: time="2026-04-21T10:25:27.386444253Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:27.391116 containerd[1990]: time="2026-04-21T10:25:27.391045694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:27.392256 containerd[1990]: time="2026-04-21T10:25:27.392210660Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.560678584s"
Apr 21 10:25:27.392770 containerd[1990]: time="2026-04-21T10:25:27.392396546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 21 10:25:27.393218 containerd[1990]: time="2026-04-21T10:25:27.393186284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 21 10:25:28.420897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:25:28.430138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:28.700072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:28.704735 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:25:28.783727 kubelet[2551]: E0421 10:25:28.783652 2551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:25:28.787746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:25:28.788539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:25:28.872708 containerd[1990]: time="2026-04-21T10:25:28.872653708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:28.874081 containerd[1990]: time="2026-04-21T10:25:28.874023445Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555290"
Apr 21 10:25:28.875583 containerd[1990]: time="2026-04-21T10:25:28.875169412Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:28.878448 containerd[1990]: time="2026-04-21T10:25:28.878389328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:28.879994 containerd[1990]: time="2026-04-21T10:25:28.879565689Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.486341573s"
Apr 21 10:25:28.879994 containerd[1990]: time="2026-04-21T10:25:28.879609781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 21 10:25:28.880428 containerd[1990]: time="2026-04-21T10:25:28.880404529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 21 10:25:30.034551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014810150.mount: Deactivated successfully.
Apr 21 10:25:30.421054 containerd[1990]: time="2026-04-21T10:25:30.421006789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:30.422377 containerd[1990]: time="2026-04-21T10:25:30.422216621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699925"
Apr 21 10:25:30.423885 containerd[1990]: time="2026-04-21T10:25:30.423844761Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:30.426950 containerd[1990]: time="2026-04-21T10:25:30.426749844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:30.427984 containerd[1990]: time="2026-04-21T10:25:30.427443943Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.546833054s"
Apr 21 10:25:30.427984 containerd[1990]: time="2026-04-21T10:25:30.427488740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 21 10:25:30.428325 containerd[1990]: time="2026-04-21T10:25:30.428297698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 21 10:25:30.957627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427691124.mount: Deactivated successfully.
Apr 21 10:25:32.384593 containerd[1990]: time="2026-04-21T10:25:32.384370174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.386123 containerd[1990]: time="2026-04-21T10:25:32.386069006Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Apr 21 10:25:32.387235 containerd[1990]: time="2026-04-21T10:25:32.386959451Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.390556 containerd[1990]: time="2026-04-21T10:25:32.390490499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.392637 containerd[1990]: time="2026-04-21T10:25:32.391695626Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.963362391s"
Apr 21 10:25:32.392637 containerd[1990]: time="2026-04-21T10:25:32.391744509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 21 10:25:32.392637 containerd[1990]: time="2026-04-21T10:25:32.392348928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:25:32.905937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701609207.mount: Deactivated successfully.
Apr 21 10:25:32.915402 containerd[1990]: time="2026-04-21T10:25:32.915344487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.916708 containerd[1990]: time="2026-04-21T10:25:32.916564045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 21 10:25:32.918301 containerd[1990]: time="2026-04-21T10:25:32.918207234Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.925791 containerd[1990]: time="2026-04-21T10:25:32.925336579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:32.926495 containerd[1990]: time="2026-04-21T10:25:32.926458561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 534.074954ms"
Apr 21 10:25:32.926653 containerd[1990]: time="2026-04-21T10:25:32.926630501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:25:32.927564 containerd[1990]: time="2026-04-21T10:25:32.927504117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 21 10:25:33.506746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216200338.mount: Deactivated successfully.
Apr 21 10:25:34.127857 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 21 10:25:35.141124 containerd[1990]: time="2026-04-21T10:25:35.139752483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:35.180295 containerd[1990]: time="2026-04-21T10:25:35.180205810Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644465"
Apr 21 10:25:35.249420 containerd[1990]: time="2026-04-21T10:25:35.247579623Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:35.372059 containerd[1990]: time="2026-04-21T10:25:35.372001223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:25:35.373928 containerd[1990]: time="2026-04-21T10:25:35.373645611Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.446103458s"
Apr 21 10:25:35.373928 containerd[1990]: time="2026-04-21T10:25:35.373716977Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 21 10:25:36.868382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:36.876139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:36.922370 systemd[1]: Reloading requested from client PID 2716 ('systemctl') (unit session-7.scope)...
Apr 21 10:25:36.922392 systemd[1]: Reloading...
Apr 21 10:25:37.076814 zram_generator::config[2756]: No configuration found.
Apr 21 10:25:37.226340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:25:37.312942 systemd[1]: Reloading finished in 389 ms.
Apr 21 10:25:37.371070 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:25:37.371185 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:25:37.371541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:37.378340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:37.589367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:37.603263 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:25:37.653157 kubelet[2820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
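Each successful pull in the log ends with a containerd `Pulled image … size "N" in <duration>` summary, which makes the pull timings easy to tabulate. A sketch of extracting image, size, and duration from one such summary (sample text is the etcd pull above with the journal's quote-escaping removed; the regex is written against these lines only):

```python
import re

# Matches containerd's pull-summary message. Durations appear both as
# seconds ("2.446103458s") and milliseconds ("534.074954ms") in the log.
PULLED = re.compile(r'Pulled image "([^"]+)" .* size "(\d+)" in ([0-9.]+)(ms|s)')

line = ('Pulled image "registry.k8s.io/etcd:3.6.6-0" with image id '
        '"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2", '
        'size "23641797" in 2.446103458s')
image, size, value, unit = PULLED.search(line).groups()
seconds = float(value) / (1000.0 if unit == "ms" else 1.0)
print(image, int(size), seconds)
```

Running this over every pull line gives a quick per-image timing table, useful when deciding whether a slow node bootstrap is spent in the registry or elsewhere.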
Apr 21 10:25:37.875235 kubelet[2820]: I0421 10:25:37.875100 2820 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:25:37.875235 kubelet[2820]: I0421 10:25:37.875148 2820 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:25:37.875235 kubelet[2820]: I0421 10:25:37.875170 2820 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:25:37.875235 kubelet[2820]: I0421 10:25:37.875177 2820 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:25:37.875848 kubelet[2820]: I0421 10:25:37.875601 2820 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:25:37.886617 kubelet[2820]: I0421 10:25:37.886578 2820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:25:37.890421 kubelet[2820]: E0421 10:25:37.890388 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.209:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:25:37.895976 kubelet[2820]: E0421 10:25:37.895651 2820 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:25:37.895976 kubelet[2820]: I0421 10:25:37.895712 2820 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:25:37.906355 kubelet[2820]: I0421 10:25:37.906319 2820 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:25:37.909782 kubelet[2820]: I0421 10:25:37.909701 2820 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:25:37.910024 kubelet[2820]: I0421 10:25:37.909788 2820 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:25:37.910160 kubelet[2820]: I0421 10:25:37.910026 2820 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:25:37.910160 kubelet[2820]: I0421 10:25:37.910042 2820 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:25:37.910242 kubelet[2820]: I0421 10:25:37.910192 2820 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:25:37.912666 kubelet[2820]: I0421 10:25:37.912631 2820 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:25:37.912888 kubelet[2820]: I0421 10:25:37.912862 2820 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:25:37.912948 kubelet[2820]: I0421 10:25:37.912889 2820 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:25:37.912948 kubelet[2820]: I0421 10:25:37.912924 2820 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:25:37.912948 kubelet[2820]: I0421 10:25:37.912937 2820 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:25:37.917460 kubelet[2820]: I0421 10:25:37.916918 2820 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:25:37.919862 kubelet[2820]: I0421 10:25:37.919838 2820 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:25:37.920044 kubelet[2820]: I0421 10:25:37.920032 2820 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:25:37.920181 kubelet[2820]: W0421 10:25:37.920156 2820 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:25:37.924868 kubelet[2820]: I0421 10:25:37.924043 2820 server.go:1257] "Started kubelet"
Apr 21 10:25:37.926513 kubelet[2820]: I0421 10:25:37.926460 2820 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:25:37.937009 kubelet[2820]: I0421 10:25:37.936871 2820 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:25:37.937330 kubelet[2820]: I0421 10:25:37.937185 2820 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:25:37.937493 kubelet[2820]: I0421 10:25:37.937428 2820 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:25:37.937929 kubelet[2820]: I0421 10:25:37.937909 2820 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:25:37.947177 kubelet[2820]: I0421 10:25:37.947148 2820 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:25:37.947661 kubelet[2820]: E0421 10:25:37.945224 2820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.209:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-209.18a8584c588c3ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-209,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-209,},FirstTimestamp:2026-04-21 10:25:37.924005618 +0000 UTC m=+0.315820162,LastTimestamp:2026-04-21 10:25:37.924005618 +0000 UTC m=+0.315820162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-209,}"
Apr 21 10:25:37.949092 kubelet[2820]: I0421 10:25:37.948604 2820 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:25:37.949861 kubelet[2820]: I0421 10:25:37.949838 2820 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:25:37.950214 kubelet[2820]: E0421 10:25:37.950191 2820 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:37.951988 kubelet[2820]: I0421 10:25:37.951970 2820 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:25:37.952087 kubelet[2820]: I0421 10:25:37.952071 2820 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:25:37.953290 kubelet[2820]: E0421 10:25:37.953235 2820 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="200ms"
Apr 21 10:25:37.953594 kubelet[2820]: I0421 10:25:37.953559 2820 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:25:37.953695 kubelet[2820]: I0421 10:25:37.953676 2820 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:25:37.956069 kubelet[2820]: I0421 10:25:37.956049 2820 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:25:37.961840 kubelet[2820]: E0421 10:25:37.961808 2820 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:25:37.977653 kubelet[2820]: I0421 10:25:37.977622 2820 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:25:37.977653 kubelet[2820]: I0421 10:25:37.977640 2820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:25:37.977653 kubelet[2820]: I0421 10:25:37.977661 2820 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:25:37.980856 kubelet[2820]: I0421 10:25:37.980828 2820 policy_none.go:50] "Start"
Apr 21 10:25:37.980856 kubelet[2820]: I0421 10:25:37.980851 2820 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:25:37.981016 kubelet[2820]: I0421 10:25:37.980868 2820 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:25:37.983242 kubelet[2820]: I0421 10:25:37.983215 2820 policy_none.go:44] "Start"
Apr 21 10:25:37.987897 kubelet[2820]: I0421 10:25:37.987853 2820 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:25:37.989986 kubelet[2820]: I0421 10:25:37.989956 2820 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:25:37.989986 kubelet[2820]: I0421 10:25:37.989981 2820 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:25:37.990161 kubelet[2820]: I0421 10:25:37.990008 2820 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:25:37.990161 kubelet[2820]: E0421 10:25:37.990059 2820 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:25:37.998910 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:25:38.013545 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:25:38.017874 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:25:38.029045 kubelet[2820]: E0421 10:25:38.029010 2820 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:25:38.029284 kubelet[2820]: I0421 10:25:38.029265 2820 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:25:38.029361 kubelet[2820]: I0421 10:25:38.029283 2820 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:25:38.029801 kubelet[2820]: I0421 10:25:38.029646 2820 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:25:38.030939 kubelet[2820]: E0421 10:25:38.030919 2820 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:25:38.031800 kubelet[2820]: E0421 10:25:38.031247 2820 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-209\" not found"
Apr 21 10:25:38.103499 systemd[1]: Created slice kubepods-burstable-pod6f7d3e81e22aff8e5df2762ed73ab4ad.slice - libcontainer container kubepods-burstable-pod6f7d3e81e22aff8e5df2762ed73ab4ad.slice.
Apr 21 10:25:38.119862 kubelet[2820]: E0421 10:25:38.119553 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:38.122916 systemd[1]: Created slice kubepods-burstable-pod71e9d53fe2aad458d0076ed1ac1b1d68.slice - libcontainer container kubepods-burstable-pod71e9d53fe2aad458d0076ed1ac1b1d68.slice.
Apr 21 10:25:38.125904 kubelet[2820]: E0421 10:25:38.125742 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:38.130531 systemd[1]: Created slice kubepods-burstable-pod47d624cfec07e230b5065a08565d950a.slice - libcontainer container kubepods-burstable-pod47d624cfec07e230b5065a08565d950a.slice.
Apr 21 10:25:38.132977 kubelet[2820]: E0421 10:25:38.132946 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:38.133309 kubelet[2820]: I0421 10:25:38.133287 2820 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:38.133657 kubelet[2820]: E0421 10:25:38.133633 2820 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209"
Apr 21 10:25:38.154118 kubelet[2820]: I0421 10:25:38.153323 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47d624cfec07e230b5065a08565d950a-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-209\" (UID: \"47d624cfec07e230b5065a08565d950a\") " pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:38.154118 kubelet[2820]: I0421 10:25:38.153376 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-ca-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:38.154118 kubelet[2820]: I0421 10:25:38.153425 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:38.154118 kubelet[2820]: I0421 10:25:38.153446 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:38.154118 kubelet[2820]: I0421 10:25:38.153469 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:38.154409 kubelet[2820]: I0421 10:25:38.153489 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:38.154409 kubelet[2820]: I0421 10:25:38.153509 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:38.154409 kubelet[2820]: I0421 10:25:38.153545 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:38.154409 kubelet[2820]: I0421 10:25:38.153599 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:38.154409 kubelet[2820]: E0421 10:25:38.153952 2820 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="400ms"
Apr 21 10:25:38.335273 kubelet[2820]: I0421 10:25:38.335241 2820 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:38.335642 kubelet[2820]: E0421 10:25:38.335590 2820 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209"
Apr 21 10:25:38.424477 containerd[1990]: time="2026-04-21T10:25:38.424289387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-209,Uid:6f7d3e81e22aff8e5df2762ed73ab4ad,Namespace:kube-system,Attempt:0,}"
Apr 21 10:25:38.433551 containerd[1990]: time="2026-04-21T10:25:38.433497359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-209,Uid:71e9d53fe2aad458d0076ed1ac1b1d68,Namespace:kube-system,Attempt:0,}"
Apr 21 10:25:38.436770 containerd[1990]: time="2026-04-21T10:25:38.436716813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-209,Uid:47d624cfec07e230b5065a08565d950a,Namespace:kube-system,Attempt:0,}"
Apr 21 10:25:38.555225 kubelet[2820]: E0421 10:25:38.555175 2820 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="800ms"
Apr 21 10:25:38.739861 kubelet[2820]: I0421 10:25:38.738351 2820 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:38.740718 kubelet[2820]: E0421 10:25:38.740522 2820 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209"
Apr 21 10:25:38.927119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470651422.mount: Deactivated successfully.
Apr 21 10:25:38.937029 containerd[1990]: time="2026-04-21T10:25:38.936974271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:25:38.938464 containerd[1990]: time="2026-04-21T10:25:38.938418194Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:25:38.939633 containerd[1990]: time="2026-04-21T10:25:38.939561352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 21 10:25:38.940912 containerd[1990]: time="2026-04-21T10:25:38.940865524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:25:38.941998 containerd[1990]: time="2026-04-21T10:25:38.941959499Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:25:38.943049 containerd[1990]: time="2026-04-21T10:25:38.943008541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:25:38.944219 containerd[1990]: time="2026-04-21T10:25:38.944162771Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:25:38.947821 containerd[1990]: time="2026-04-21T10:25:38.947733854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:25:38.948813 containerd[1990]: time="2026-04-21T10:25:38.948730557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.192485ms"
Apr 21 10:25:38.951221 containerd[1990]: time="2026-04-21T10:25:38.951177171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.586138ms"
Apr 21 10:25:38.954622 containerd[1990]: time="2026-04-21T10:25:38.954576467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.768434ms"
Apr 21 10:25:39.184317 containerd[1990]: time="2026-04-21T10:25:39.184101732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:25:39.184317 containerd[1990]: time="2026-04-21T10:25:39.184194255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:25:39.184317 containerd[1990]: time="2026-04-21T10:25:39.184220195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.184710 containerd[1990]: time="2026-04-21T10:25:39.184419260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.194448 containerd[1990]: time="2026-04-21T10:25:39.194119783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:25:39.194448 containerd[1990]: time="2026-04-21T10:25:39.194206035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:25:39.194448 containerd[1990]: time="2026-04-21T10:25:39.194224662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.194448 containerd[1990]: time="2026-04-21T10:25:39.194329010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.196896 containerd[1990]: time="2026-04-21T10:25:39.196711716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:25:39.197194 containerd[1990]: time="2026-04-21T10:25:39.196982282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:25:39.197564 containerd[1990]: time="2026-04-21T10:25:39.197514660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.198617 containerd[1990]: time="2026-04-21T10:25:39.198155336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:25:39.226078 systemd[1]: Started cri-containerd-a0c9a14ff5b0b5efbf1cef226cfea4ebc5c83f6ea9239e7997f920e8a275d0f8.scope - libcontainer container a0c9a14ff5b0b5efbf1cef226cfea4ebc5c83f6ea9239e7997f920e8a275d0f8.
Apr 21 10:25:39.245415 systemd[1]: Started cri-containerd-a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c.scope - libcontainer container a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c.
Apr 21 10:25:39.251799 systemd[1]: Started cri-containerd-ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2.scope - libcontainer container ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2.
Apr 21 10:25:39.349404 containerd[1990]: time="2026-04-21T10:25:39.349353639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-209,Uid:6f7d3e81e22aff8e5df2762ed73ab4ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0c9a14ff5b0b5efbf1cef226cfea4ebc5c83f6ea9239e7997f920e8a275d0f8\""
Apr 21 10:25:39.350982 containerd[1990]: time="2026-04-21T10:25:39.350944713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-209,Uid:47d624cfec07e230b5065a08565d950a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2\""
Apr 21 10:25:39.356426 kubelet[2820]: E0421 10:25:39.356293 2820 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="1.6s"
Apr 21 10:25:39.361388 containerd[1990]: time="2026-04-21T10:25:39.361346519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-209,Uid:71e9d53fe2aad458d0076ed1ac1b1d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c\""
Apr 21 10:25:39.364692 containerd[1990]: time="2026-04-21T10:25:39.364378555Z" level=info msg="CreateContainer within sandbox \"ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 10:25:39.366898 containerd[1990]: time="2026-04-21T10:25:39.366862323Z" level=info msg="CreateContainer within sandbox \"a0c9a14ff5b0b5efbf1cef226cfea4ebc5c83f6ea9239e7997f920e8a275d0f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 10:25:39.385026 containerd[1990]: time="2026-04-21T10:25:39.384846361Z" level=info msg="CreateContainer within sandbox \"a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 10:25:39.420088 containerd[1990]: time="2026-04-21T10:25:39.420038363Z" level=info msg="CreateContainer within sandbox \"ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366\""
Apr 21 10:25:39.421862 containerd[1990]: time="2026-04-21T10:25:39.421823013Z" level=info msg="CreateContainer within sandbox \"a0c9a14ff5b0b5efbf1cef226cfea4ebc5c83f6ea9239e7997f920e8a275d0f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93a58e48054d2d08e2bbc67f47a06bd8fdba906448bd78a65ddc842d245c20b7\""
Apr 21 10:25:39.422394 containerd[1990]: time="2026-04-21T10:25:39.422359666Z" level=info msg="StartContainer for \"93a58e48054d2d08e2bbc67f47a06bd8fdba906448bd78a65ddc842d245c20b7\""
Apr 21 10:25:39.425549 containerd[1990]: time="2026-04-21T10:25:39.423940046Z" level=info msg="CreateContainer within sandbox \"a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2\""
Apr 21 10:25:39.425549 containerd[1990]: time="2026-04-21T10:25:39.424108683Z" level=info msg="StartContainer for \"ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366\""
Apr 21 10:25:39.436158 containerd[1990]: time="2026-04-21T10:25:39.436058732Z" level=info msg="StartContainer for \"8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2\""
Apr 21 10:25:39.466003 systemd[1]: Started cri-containerd-93a58e48054d2d08e2bbc67f47a06bd8fdba906448bd78a65ddc842d245c20b7.scope - libcontainer container 93a58e48054d2d08e2bbc67f47a06bd8fdba906448bd78a65ddc842d245c20b7.
Apr 21 10:25:39.474991 systemd[1]: Started cri-containerd-ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366.scope - libcontainer container ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366.
Apr 21 10:25:39.507002 systemd[1]: Started cri-containerd-8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2.scope - libcontainer container 8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2.
Apr 21 10:25:39.547826 kubelet[2820]: I0421 10:25:39.547797 2820 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:39.548754 kubelet[2820]: E0421 10:25:39.548705 2820 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209"
Apr 21 10:25:39.558740 containerd[1990]: time="2026-04-21T10:25:39.558697744Z" level=info msg="StartContainer for \"93a58e48054d2d08e2bbc67f47a06bd8fdba906448bd78a65ddc842d245c20b7\" returns successfully"
Apr 21 10:25:39.595100 containerd[1990]: time="2026-04-21T10:25:39.594950089Z" level=info msg="StartContainer for \"8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2\" returns successfully"
Apr 21 10:25:39.633723 containerd[1990]: time="2026-04-21T10:25:39.633678621Z" level=info msg="StartContainer for \"ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366\" returns successfully"
Apr 21 10:25:40.012458 kubelet[2820]: E0421 10:25:40.011908 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:40.015508 kubelet[2820]: E0421 10:25:40.015263 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:40.021007 kubelet[2820]: E0421 10:25:40.019724 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:41.021707 kubelet[2820]: E0421 10:25:41.021609 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:41.023216 kubelet[2820]: E0421 10:25:41.023180 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:41.024222 kubelet[2820]: E0421 10:25:41.024196 2820 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:41.150484 kubelet[2820]: I0421 10:25:41.150451 2820 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:41.402872 kubelet[2820]: E0421 10:25:41.401629 2820 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-209\" not found" node="ip-172-31-16-209"
Apr 21 10:25:41.519030 kubelet[2820]: I0421 10:25:41.518987 2820 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-16-209"
Apr 21 10:25:41.519030 kubelet[2820]: E0421 10:25:41.519034 2820 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-209\": node \"ip-172-31-16-209\" not found"
Apr 21 10:25:41.542141 kubelet[2820]: E0421 10:25:41.542070 2820 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:41.642834 kubelet[2820]: E0421 10:25:41.642793 2820 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:41.743635 kubelet[2820]: E0421 10:25:41.743587 2820 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:41.844773 kubelet[2820]: E0421 10:25:41.844710 2820 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:41.918095 kubelet[2820]: I0421 10:25:41.918061 2820 apiserver.go:52] "Watching apiserver"
Apr 21 10:25:41.951965 kubelet[2820]: I0421 10:25:41.951900 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:41.952462 kubelet[2820]: I0421 10:25:41.952170 2820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:25:41.967252 kubelet[2820]: E0421 10:25:41.967211 2820 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:41.967252 kubelet[2820]: I0421 10:25:41.967242 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:41.969498 kubelet[2820]: E0421 10:25:41.969470 2820 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:41.970397 kubelet[2820]: I0421 10:25:41.969711 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:41.972386 kubelet[2820]: E0421 10:25:41.972284 2820 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:42.022018 kubelet[2820]: I0421 10:25:42.021541 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:42.023687 kubelet[2820]: I0421 10:25:42.023614 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:42.025372 kubelet[2820]: E0421 10:25:42.025343 2820 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:42.026159 kubelet[2820]: E0421 10:25:42.026134 2820 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:43.488163 kubelet[2820]: I0421 10:25:43.488126 2820 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:43.588867 systemd[1]: Reloading requested from client PID 3097 ('systemctl') (unit session-7.scope)...
Apr 21 10:25:43.588884 systemd[1]: Reloading...
Apr 21 10:25:43.692810 zram_generator::config[3133]: No configuration found.
Apr 21 10:25:43.833273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:25:43.936098 systemd[1]: Reloading finished in 346 ms.
Apr 21 10:25:43.979778 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:43.991409 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:25:43.991663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:44.001905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:25:44.223977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:25:44.233514 (kubelet)[3197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:25:44.313421 kubelet[3197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:25:44.324726 kubelet[3197]: I0421 10:25:44.324661 3197 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:25:44.324726 kubelet[3197]: I0421 10:25:44.324715 3197 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:25:44.326510 kubelet[3197]: I0421 10:25:44.326481 3197 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:25:44.326510 kubelet[3197]: I0421 10:25:44.326504 3197 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:25:44.326911 kubelet[3197]: I0421 10:25:44.326887 3197 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:25:44.328116 kubelet[3197]: I0421 10:25:44.328081 3197 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 10:25:44.333514 kubelet[3197]: I0421 10:25:44.333484 3197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:25:44.342930 kubelet[3197]: E0421 10:25:44.342867 3197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:25:44.343088 kubelet[3197]: I0421 10:25:44.342965 3197 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:25:44.345647 kubelet[3197]: I0421 10:25:44.345333 3197 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:25:44.346626 kubelet[3197]: I0421 10:25:44.346576 3197 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:25:44.346839 kubelet[3197]: I0421 10:25:44.346622 3197 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:25:44.346999 kubelet[3197]: I0421 10:25:44.346847 3197 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:25:44.346999 kubelet[3197]: I0421 10:25:44.346863 3197 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:25:44.346999 kubelet[3197]: I0421 10:25:44.346895 3197 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:25:44.349107 kubelet[3197]: I0421 10:25:44.349081 3197 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:25:44.350646 kubelet[3197]: I0421 10:25:44.350622 3197 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:25:44.350646 kubelet[3197]: I0421 10:25:44.350649 3197 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:25:44.352165 kubelet[3197]: I0421 10:25:44.350678 3197 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:25:44.352165 kubelet[3197]: I0421 10:25:44.350690 3197 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:25:44.362552 kubelet[3197]: I0421 10:25:44.361458 3197 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:25:44.366508 kubelet[3197]: I0421 10:25:44.366475 3197 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:25:44.366655 kubelet[3197]: I0421 10:25:44.366523 3197 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:25:44.371231 kubelet[3197]: I0421 10:25:44.369743 3197 server.go:1257] "Started kubelet"
Apr 21 10:25:44.379778 kubelet[3197]: I0421 10:25:44.377602 3197 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:25:44.382486 kubelet[3197]: I0421 10:25:44.382445 3197 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:25:44.389030 kubelet[3197]: I0421 10:25:44.386885 3197 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:25:44.389030 kubelet[3197]: I0421 10:25:44.386973 3197 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:25:44.389030 kubelet[3197]: I0421 10:25:44.387234 3197 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:25:44.393249 kubelet[3197]: I0421 10:25:44.392821 3197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:25:44.399962 kubelet[3197]: I0421 10:25:44.394855 3197 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:25:44.400502 kubelet[3197]: I0421 10:25:44.394869 3197 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:25:44.400502 kubelet[3197]: E0421 10:25:44.395027 3197 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-16-209\" not found"
Apr 21 10:25:44.401448 kubelet[3197]: I0421 10:25:44.400634 3197 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:25:44.402743 kubelet[3197]: I0421 10:25:44.402716 3197 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:25:44.407650 kubelet[3197]: I0421 10:25:44.407535 3197 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:25:44.407805 kubelet[3197]: I0421 10:25:44.407648 3197 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:25:44.421953 kubelet[3197]: I0421 10:25:44.419364 3197 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:25:44.442508 kubelet[3197]: I0421 10:25:44.442356 3197 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:25:44.447787 kubelet[3197]: I0421 10:25:44.447577 3197 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:25:44.447787 kubelet[3197]: I0421 10:25:44.447600 3197 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:25:44.447787 kubelet[3197]: I0421 10:25:44.447621 3197 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:25:44.447787 kubelet[3197]: E0421 10:25:44.447663 3197 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496028 3197 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496046 3197 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496069 3197 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496230 3197 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496244 3197 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496267 3197 policy_none.go:50] "Start"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496338 3197 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:25:44.496657 kubelet[3197]: I0421 10:25:44.496360 3197 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:25:44.500996 kubelet[3197]: I0421 10:25:44.500024 3197 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 10:25:44.500996 kubelet[3197]: I0421 10:25:44.500061 3197 policy_none.go:44] "Start"
Apr 21 10:25:44.508269 kubelet[3197]: E0421 10:25:44.508240 3197 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:25:44.510135 kubelet[3197]: I0421 10:25:44.508925 3197 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:25:44.510135 kubelet[3197]: I0421 10:25:44.508944 3197 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:25:44.510135 kubelet[3197]: I0421 10:25:44.509372 3197 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:25:44.513259 kubelet[3197]: E0421 10:25:44.513234 3197 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:25:44.552652 kubelet[3197]: I0421 10:25:44.552585 3197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:44.553716 kubelet[3197]: I0421 10:25:44.553694 3197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.564992 kubelet[3197]: I0421 10:25:44.553251 3197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:44.569008 kubelet[3197]: E0421 10:25:44.568964 3197 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-209\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.602437 kubelet[3197]: I0421 10:25:44.602395 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.602672 kubelet[3197]: I0421 10:25:44.602640 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.603558 kubelet[3197]: I0421 10:25:44.603531 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-ca-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:44.603707 kubelet[3197]: I0421 10:25:44.603695 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:44.603906 kubelet[3197]: I0421 10:25:44.603885 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.604029 kubelet[3197]: I0421 10:25:44.604013 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.604162 kubelet[3197]: I0421 10:25:44.604145 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71e9d53fe2aad458d0076ed1ac1b1d68-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"71e9d53fe2aad458d0076ed1ac1b1d68\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209"
Apr 21 10:25:44.604542 kubelet[3197]: I0421 10:25:44.604521 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47d624cfec07e230b5065a08565d950a-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-209\" (UID: \"47d624cfec07e230b5065a08565d950a\") " pod="kube-system/kube-scheduler-ip-172-31-16-209"
Apr 21 10:25:44.604709 kubelet[3197]: I0421 10:25:44.604692 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d3e81e22aff8e5df2762ed73ab4ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"6f7d3e81e22aff8e5df2762ed73ab4ad\") " pod="kube-system/kube-apiserver-ip-172-31-16-209"
Apr 21 10:25:44.637242 kubelet[3197]: I0421 10:25:44.637208 3197 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-16-209"
Apr 21 10:25:44.649450 kubelet[3197]: I0421 10:25:44.649423 3197 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-16-209"
Apr 21 10:25:44.649691 kubelet[3197]: I0421 10:25:44.649680 3197 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-16-209"
Apr 21 10:25:45.357101 kubelet[3197]: I0421 10:25:45.357059 3197 apiserver.go:52] "Watching apiserver"
Apr 21 10:25:45.400611 kubelet[3197]: I0421 10:25:45.400575 3197 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:25:45.469454 kubelet[3197]: I0421 10:25:45.469421 3197 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-209" Apr 21 10:25:45.491677 kubelet[3197]: E0421 10:25:45.491638 3197 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-209\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-209" Apr 21 10:25:45.530047 kubelet[3197]: I0421 10:25:45.529969 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-209" podStartSLOduration=1.529954561 podStartE2EDuration="1.529954561s" podCreationTimestamp="2026-04-21 10:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:45.516564479 +0000 UTC m=+1.276735041" watchObservedRunningTime="2026-04-21 10:25:45.529954561 +0000 UTC m=+1.290125117" Apr 21 10:25:45.544879 kubelet[3197]: I0421 10:25:45.543566 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-209" podStartSLOduration=2.543526103 podStartE2EDuration="2.543526103s" podCreationTimestamp="2026-04-21 10:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:45.543431082 +0000 UTC m=+1.303601642" watchObservedRunningTime="2026-04-21 10:25:45.543526103 +0000 UTC m=+1.303696663" Apr 21 10:25:45.544879 kubelet[3197]: I0421 10:25:45.543786 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-209" podStartSLOduration=1.5437794299999998 podStartE2EDuration="1.54377943s" podCreationTimestamp="2026-04-21 10:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:45.530608861 +0000 UTC m=+1.290779420" watchObservedRunningTime="2026-04-21 10:25:45.54377943 +0000 UTC m=+1.303949982" Apr 21 10:25:48.239868 update_engine[1969]: I20260421 10:25:48.239794 1969 update_attempter.cc:509] Updating boot flags... Apr 21 10:25:48.297826 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3262) Apr 21 10:25:48.512834 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3265) Apr 21 10:25:49.440811 kubelet[3197]: I0421 10:25:49.440777 3197 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:25:49.442055 containerd[1990]: time="2026-04-21T10:25:49.441896970Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:25:49.442967 kubelet[3197]: I0421 10:25:49.442223 3197 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:25:50.097538 systemd[1]: Created slice kubepods-besteffort-pode4810841_9eec_483b_b48f_ee31faabc56b.slice - libcontainer container kubepods-besteffort-pode4810841_9eec_483b_b48f_ee31faabc56b.slice. 
Apr 21 10:25:50.142820 kubelet[3197]: I0421 10:25:50.142754 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4810841-9eec-483b-b48f-ee31faabc56b-kube-proxy\") pod \"kube-proxy-68jvt\" (UID: \"e4810841-9eec-483b-b48f-ee31faabc56b\") " pod="kube-system/kube-proxy-68jvt" Apr 21 10:25:50.142999 kubelet[3197]: I0421 10:25:50.142827 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4810841-9eec-483b-b48f-ee31faabc56b-lib-modules\") pod \"kube-proxy-68jvt\" (UID: \"e4810841-9eec-483b-b48f-ee31faabc56b\") " pod="kube-system/kube-proxy-68jvt" Apr 21 10:25:50.142999 kubelet[3197]: I0421 10:25:50.142855 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9527v\" (UniqueName: \"kubernetes.io/projected/e4810841-9eec-483b-b48f-ee31faabc56b-kube-api-access-9527v\") pod \"kube-proxy-68jvt\" (UID: \"e4810841-9eec-483b-b48f-ee31faabc56b\") " pod="kube-system/kube-proxy-68jvt" Apr 21 10:25:50.142999 kubelet[3197]: I0421 10:25:50.142887 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4810841-9eec-483b-b48f-ee31faabc56b-xtables-lock\") pod \"kube-proxy-68jvt\" (UID: \"e4810841-9eec-483b-b48f-ee31faabc56b\") " pod="kube-system/kube-proxy-68jvt" Apr 21 10:25:50.417035 containerd[1990]: time="2026-04-21T10:25:50.416927874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68jvt,Uid:e4810841-9eec-483b-b48f-ee31faabc56b,Namespace:kube-system,Attempt:0,}" Apr 21 10:25:50.456994 containerd[1990]: time="2026-04-21T10:25:50.456038758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:25:50.456994 containerd[1990]: time="2026-04-21T10:25:50.456148246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:25:50.456994 containerd[1990]: time="2026-04-21T10:25:50.456166122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:50.456994 containerd[1990]: time="2026-04-21T10:25:50.456359623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:50.496088 systemd[1]: Started cri-containerd-9a142a74103e5c6ed6f0bb81cec8838ef969386a3b3afa3bb01f7808a07a1ade.scope - libcontainer container 9a142a74103e5c6ed6f0bb81cec8838ef969386a3b3afa3bb01f7808a07a1ade. Apr 21 10:25:50.523387 containerd[1990]: time="2026-04-21T10:25:50.523338977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68jvt,Uid:e4810841-9eec-483b-b48f-ee31faabc56b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a142a74103e5c6ed6f0bb81cec8838ef969386a3b3afa3bb01f7808a07a1ade\"" Apr 21 10:25:50.532439 containerd[1990]: time="2026-04-21T10:25:50.531891262Z" level=info msg="CreateContainer within sandbox \"9a142a74103e5c6ed6f0bb81cec8838ef969386a3b3afa3bb01f7808a07a1ade\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:25:50.563032 containerd[1990]: time="2026-04-21T10:25:50.562976302Z" level=info msg="CreateContainer within sandbox \"9a142a74103e5c6ed6f0bb81cec8838ef969386a3b3afa3bb01f7808a07a1ade\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"217bef48c7c53a3a251a5bebc6bac4aa8ac06ada98bbff20671b7df74a92d6ad\"" Apr 21 10:25:50.563821 containerd[1990]: time="2026-04-21T10:25:50.563694641Z" level=info msg="StartContainer for \"217bef48c7c53a3a251a5bebc6bac4aa8ac06ada98bbff20671b7df74a92d6ad\"" 
Apr 21 10:25:50.601174 systemd[1]: Started cri-containerd-217bef48c7c53a3a251a5bebc6bac4aa8ac06ada98bbff20671b7df74a92d6ad.scope - libcontainer container 217bef48c7c53a3a251a5bebc6bac4aa8ac06ada98bbff20671b7df74a92d6ad. Apr 21 10:25:50.678454 systemd[1]: Created slice kubepods-besteffort-pod1443aaa4_e9de_44db_887b_690ca39cc085.slice - libcontainer container kubepods-besteffort-pod1443aaa4_e9de_44db_887b_690ca39cc085.slice. Apr 21 10:25:50.691369 containerd[1990]: time="2026-04-21T10:25:50.690969040Z" level=info msg="StartContainer for \"217bef48c7c53a3a251a5bebc6bac4aa8ac06ada98bbff20671b7df74a92d6ad\" returns successfully" Apr 21 10:25:50.748754 kubelet[3197]: I0421 10:25:50.748623 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1443aaa4-e9de-44db-887b-690ca39cc085-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-rqgpz\" (UID: \"1443aaa4-e9de-44db-887b-690ca39cc085\") " pod="tigera-operator/tigera-operator-6cf4cccc57-rqgpz" Apr 21 10:25:50.748754 kubelet[3197]: I0421 10:25:50.748681 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vzg\" (UniqueName: \"kubernetes.io/projected/1443aaa4-e9de-44db-887b-690ca39cc085-kube-api-access-p6vzg\") pod \"tigera-operator-6cf4cccc57-rqgpz\" (UID: \"1443aaa4-e9de-44db-887b-690ca39cc085\") " pod="tigera-operator/tigera-operator-6cf4cccc57-rqgpz" Apr 21 10:25:50.990497 containerd[1990]: time="2026-04-21T10:25:50.990342803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-rqgpz,Uid:1443aaa4-e9de-44db-887b-690ca39cc085,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:25:51.030344 containerd[1990]: time="2026-04-21T10:25:51.030193128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:25:51.030344 containerd[1990]: time="2026-04-21T10:25:51.030250868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:25:51.030344 containerd[1990]: time="2026-04-21T10:25:51.030266532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:51.030744 containerd[1990]: time="2026-04-21T10:25:51.030388973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:51.052451 systemd[1]: Started cri-containerd-4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1.scope - libcontainer container 4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1. Apr 21 10:25:51.118137 containerd[1990]: time="2026-04-21T10:25:51.117817882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-rqgpz,Uid:1443aaa4-e9de-44db-887b-690ca39cc085,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1\"" Apr 21 10:25:51.153641 containerd[1990]: time="2026-04-21T10:25:51.153358036Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:25:51.269054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818498717.mount: Deactivated successfully. 
Apr 21 10:25:51.515013 kubelet[3197]: I0421 10:25:51.514951 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-68jvt" podStartSLOduration=1.514926965 podStartE2EDuration="1.514926965s" podCreationTimestamp="2026-04-21 10:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:51.507177723 +0000 UTC m=+7.267348286" watchObservedRunningTime="2026-04-21 10:25:51.514926965 +0000 UTC m=+7.275097525" Apr 21 10:25:52.384953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848969513.mount: Deactivated successfully. Apr 21 10:25:53.741905 containerd[1990]: time="2026-04-21T10:25:53.741855317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:53.743168 containerd[1990]: time="2026-04-21T10:25:53.743100223Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:25:53.744559 containerd[1990]: time="2026-04-21T10:25:53.744500543Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:53.748464 containerd[1990]: time="2026-04-21T10:25:53.747388733Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:53.748464 containerd[1990]: time="2026-04-21T10:25:53.748139595Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest 
\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.594716944s" Apr 21 10:25:53.748464 containerd[1990]: time="2026-04-21T10:25:53.748176535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:25:53.754744 containerd[1990]: time="2026-04-21T10:25:53.754541476Z" level=info msg="CreateContainer within sandbox \"4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:25:53.770936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441116654.mount: Deactivated successfully. Apr 21 10:25:53.771742 containerd[1990]: time="2026-04-21T10:25:53.771581406Z" level=info msg="CreateContainer within sandbox \"4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8\"" Apr 21 10:25:53.774709 containerd[1990]: time="2026-04-21T10:25:53.773255385Z" level=info msg="StartContainer for \"dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8\"" Apr 21 10:25:53.809069 systemd[1]: run-containerd-runc-k8s.io-dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8-runc.xLUJqD.mount: Deactivated successfully. Apr 21 10:25:53.822035 systemd[1]: Started cri-containerd-dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8.scope - libcontainer container dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8. 
Apr 21 10:25:53.853751 containerd[1990]: time="2026-04-21T10:25:53.853628554Z" level=info msg="StartContainer for \"dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8\" returns successfully" Apr 21 10:25:56.488217 kubelet[3197]: I0421 10:25:56.487635 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-rqgpz" podStartSLOduration=3.890263011 podStartE2EDuration="6.487614048s" podCreationTimestamp="2026-04-21 10:25:50 +0000 UTC" firstStartedPulling="2026-04-21 10:25:51.152061279 +0000 UTC m=+6.912231830" lastFinishedPulling="2026-04-21 10:25:53.749412316 +0000 UTC m=+9.509582867" observedRunningTime="2026-04-21 10:25:54.530728222 +0000 UTC m=+10.290898781" watchObservedRunningTime="2026-04-21 10:25:56.487614048 +0000 UTC m=+12.247784613" Apr 21 10:26:01.042745 sudo[2319]: pam_unix(sudo:session): session closed for user root Apr 21 10:26:01.210637 sshd[2316]: pam_unix(sshd:session): session closed for user core Apr 21 10:26:01.215602 systemd[1]: sshd@6-172.31.16.209:22-50.85.169.122:57508.service: Deactivated successfully. Apr 21 10:26:01.219515 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:26:01.219918 systemd[1]: session-7.scope: Consumed 4.279s CPU time, 152.0M memory peak, 0B memory swap peak. Apr 21 10:26:01.221728 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:26:01.226130 systemd-logind[1964]: Removed session 7. Apr 21 10:26:03.869487 systemd[1]: Created slice kubepods-besteffort-pod35d4f13c_76e8_4f3f_8d3e_4abb8f5c83ae.slice - libcontainer container kubepods-besteffort-pod35d4f13c_76e8_4f3f_8d3e_4abb8f5c83ae.slice. 
Apr 21 10:26:03.957710 kubelet[3197]: I0421 10:26:03.957504 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae-tigera-ca-bundle\") pod \"calico-typha-784b8797bb-8t5l9\" (UID: \"35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae\") " pod="calico-system/calico-typha-784b8797bb-8t5l9" Apr 21 10:26:03.957710 kubelet[3197]: I0421 10:26:03.957561 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae-typha-certs\") pod \"calico-typha-784b8797bb-8t5l9\" (UID: \"35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae\") " pod="calico-system/calico-typha-784b8797bb-8t5l9" Apr 21 10:26:03.957710 kubelet[3197]: I0421 10:26:03.957590 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plm7\" (UniqueName: \"kubernetes.io/projected/35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae-kube-api-access-2plm7\") pod \"calico-typha-784b8797bb-8t5l9\" (UID: \"35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae\") " pod="calico-system/calico-typha-784b8797bb-8t5l9" Apr 21 10:26:04.105465 systemd[1]: Created slice kubepods-besteffort-podab3ee8fd_a176_4738_8aea_b5d4ed79099a.slice - libcontainer container kubepods-besteffort-podab3ee8fd_a176_4738_8aea_b5d4ed79099a.slice. 
Apr 21 10:26:04.160334 kubelet[3197]: I0421 10:26:04.159113 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-tigera-ca-bundle\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160334 kubelet[3197]: I0421 10:26:04.159158 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-sys-fs\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160334 kubelet[3197]: I0421 10:26:04.159184 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-bpffs\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160334 kubelet[3197]: I0421 10:26:04.159205 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-cni-net-dir\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160334 kubelet[3197]: I0421 10:26:04.159225 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-nodeproc\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160654 kubelet[3197]: I0421 10:26:04.159248 3197 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-cni-log-dir\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160654 kubelet[3197]: I0421 10:26:04.159272 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-flexvol-driver-host\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160654 kubelet[3197]: I0421 10:26:04.159294 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-policysync\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160654 kubelet[3197]: I0421 10:26:04.159355 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-var-run-calico\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160654 kubelet[3197]: I0421 10:26:04.159379 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-xtables-lock\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160914 kubelet[3197]: I0421 10:26:04.159404 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-cni-bin-dir\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160914 kubelet[3197]: I0421 10:26:04.159424 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgcn\" (UniqueName: \"kubernetes.io/projected/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-kube-api-access-9xgcn\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160914 kubelet[3197]: I0421 10:26:04.159445 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-lib-modules\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160914 kubelet[3197]: I0421 10:26:04.159463 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-var-lib-calico\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.160914 kubelet[3197]: I0421 10:26:04.159487 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab3ee8fd-a176-4738-8aea-b5d4ed79099a-node-certs\") pod \"calico-node-qzfzv\" (UID: \"ab3ee8fd-a176-4738-8aea-b5d4ed79099a\") " pod="calico-system/calico-node-qzfzv" Apr 21 10:26:04.179142 containerd[1990]: time="2026-04-21T10:26:04.179028545Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-784b8797bb-8t5l9,Uid:35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae,Namespace:calico-system,Attempt:0,}" Apr 21 10:26:04.255535 kubelet[3197]: E0421 10:26:04.253478 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:04.261786 kubelet[3197]: I0421 10:26:04.260119 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ef1622f-119b-4965-be74-eb6954ebbd5e-registration-dir\") pod \"csi-node-driver-fwns8\" (UID: \"9ef1622f-119b-4965-be74-eb6954ebbd5e\") " pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:04.261786 kubelet[3197]: I0421 10:26:04.260169 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrdw\" (UniqueName: \"kubernetes.io/projected/9ef1622f-119b-4965-be74-eb6954ebbd5e-kube-api-access-wmrdw\") pod \"csi-node-driver-fwns8\" (UID: \"9ef1622f-119b-4965-be74-eb6954ebbd5e\") " pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:04.261786 kubelet[3197]: I0421 10:26:04.260346 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ef1622f-119b-4965-be74-eb6954ebbd5e-kubelet-dir\") pod \"csi-node-driver-fwns8\" (UID: \"9ef1622f-119b-4965-be74-eb6954ebbd5e\") " pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:04.261786 kubelet[3197]: I0421 10:26:04.260375 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9ef1622f-119b-4965-be74-eb6954ebbd5e-socket-dir\") pod 
\"csi-node-driver-fwns8\" (UID: \"9ef1622f-119b-4965-be74-eb6954ebbd5e\") " pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:04.261786 kubelet[3197]: I0421 10:26:04.260422 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9ef1622f-119b-4965-be74-eb6954ebbd5e-varrun\") pod \"csi-node-driver-fwns8\" (UID: \"9ef1622f-119b-4965-be74-eb6954ebbd5e\") " pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:04.281401 kubelet[3197]: E0421 10:26:04.281352 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.281401 kubelet[3197]: W0421 10:26:04.281397 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.281664 kubelet[3197]: E0421 10:26:04.281441 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.296785 kubelet[3197]: E0421 10:26:04.295062 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.296785 kubelet[3197]: W0421 10:26:04.295092 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.296785 kubelet[3197]: E0421 10:26:04.295136 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.311861 kubelet[3197]: E0421 10:26:04.311013 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.311861 kubelet[3197]: W0421 10:26:04.311041 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.311861 kubelet[3197]: E0421 10:26:04.311070 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.338583 containerd[1990]: time="2026-04-21T10:26:04.338462659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:04.338583 containerd[1990]: time="2026-04-21T10:26:04.338549643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:04.339975 containerd[1990]: time="2026-04-21T10:26:04.339902096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:04.340319 containerd[1990]: time="2026-04-21T10:26:04.340260455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:04.362788 kubelet[3197]: E0421 10:26:04.362531 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.362788 kubelet[3197]: W0421 10:26:04.362556 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.362788 kubelet[3197]: E0421 10:26:04.362582 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.363209 kubelet[3197]: E0421 10:26:04.363127 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.363718 kubelet[3197]: W0421 10:26:04.363428 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.363718 kubelet[3197]: E0421 10:26:04.363464 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.364093 kubelet[3197]: E0421 10:26:04.364079 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.364493 kubelet[3197]: W0421 10:26:04.364254 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.364493 kubelet[3197]: E0421 10:26:04.364282 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.364975 kubelet[3197]: E0421 10:26:04.364960 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.365388 kubelet[3197]: W0421 10:26:04.365155 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.365388 kubelet[3197]: E0421 10:26:04.365181 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.366205 kubelet[3197]: E0421 10:26:04.365834 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.366205 kubelet[3197]: W0421 10:26:04.365850 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.366205 kubelet[3197]: E0421 10:26:04.365867 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.366680 kubelet[3197]: E0421 10:26:04.366581 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.367000 kubelet[3197]: W0421 10:26:04.366791 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.367000 kubelet[3197]: E0421 10:26:04.366814 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.369440 kubelet[3197]: E0421 10:26:04.369421 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.369568 kubelet[3197]: W0421 10:26:04.369551 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.369806 kubelet[3197]: E0421 10:26:04.369789 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.371258 kubelet[3197]: E0421 10:26:04.370830 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.371258 kubelet[3197]: W0421 10:26:04.370846 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.371258 kubelet[3197]: E0421 10:26:04.370864 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.372168 kubelet[3197]: E0421 10:26:04.371964 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.372168 kubelet[3197]: W0421 10:26:04.371979 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.372168 kubelet[3197]: E0421 10:26:04.371996 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.373349 kubelet[3197]: E0421 10:26:04.373015 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.373349 kubelet[3197]: W0421 10:26:04.373031 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.373349 kubelet[3197]: E0421 10:26:04.373049 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.374341 kubelet[3197]: E0421 10:26:04.374051 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.374341 kubelet[3197]: W0421 10:26:04.374066 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.374341 kubelet[3197]: E0421 10:26:04.374081 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.376375 kubelet[3197]: E0421 10:26:04.375970 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.376375 kubelet[3197]: W0421 10:26:04.375985 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.376375 kubelet[3197]: E0421 10:26:04.376002 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.378498 kubelet[3197]: E0421 10:26:04.377266 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.378498 kubelet[3197]: W0421 10:26:04.377283 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.378498 kubelet[3197]: E0421 10:26:04.377301 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.379036 kubelet[3197]: E0421 10:26:04.378902 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.379036 kubelet[3197]: W0421 10:26:04.378919 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.379036 kubelet[3197]: E0421 10:26:04.378935 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.379052 systemd[1]: Started cri-containerd-5b674cbb5b7c2e44b76007992eab61abd47948122a150063166b7f572d073891.scope - libcontainer container 5b674cbb5b7c2e44b76007992eab61abd47948122a150063166b7f572d073891. 
Apr 21 10:26:04.380550 kubelet[3197]: E0421 10:26:04.379570 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.380550 kubelet[3197]: W0421 10:26:04.379584 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.380550 kubelet[3197]: E0421 10:26:04.379599 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.381561 kubelet[3197]: E0421 10:26:04.381285 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.381561 kubelet[3197]: W0421 10:26:04.381304 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.381561 kubelet[3197]: E0421 10:26:04.381321 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.381808 kubelet[3197]: E0421 10:26:04.381796 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.383781 kubelet[3197]: W0421 10:26:04.382002 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.383781 kubelet[3197]: E0421 10:26:04.382049 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.384366 kubelet[3197]: E0421 10:26:04.384197 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.384366 kubelet[3197]: W0421 10:26:04.384214 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.384366 kubelet[3197]: E0421 10:26:04.384233 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.384677 kubelet[3197]: E0421 10:26:04.384632 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.384677 kubelet[3197]: W0421 10:26:04.384646 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.384677 kubelet[3197]: E0421 10:26:04.384661 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.385269 kubelet[3197]: E0421 10:26:04.385144 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.385269 kubelet[3197]: W0421 10:26:04.385158 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.385269 kubelet[3197]: E0421 10:26:04.385172 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.385719 kubelet[3197]: E0421 10:26:04.385573 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.385719 kubelet[3197]: W0421 10:26:04.385586 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.385719 kubelet[3197]: E0421 10:26:04.385600 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.388003 kubelet[3197]: E0421 10:26:04.387813 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.388003 kubelet[3197]: W0421 10:26:04.387833 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.388003 kubelet[3197]: E0421 10:26:04.387855 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.388700 kubelet[3197]: E0421 10:26:04.388685 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.388840 kubelet[3197]: W0421 10:26:04.388826 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.388941 kubelet[3197]: E0421 10:26:04.388927 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.389349 kubelet[3197]: E0421 10:26:04.389336 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.389474 kubelet[3197]: W0421 10:26:04.389430 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.389474 kubelet[3197]: E0421 10:26:04.389450 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.391892 kubelet[3197]: E0421 10:26:04.391871 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.392057 kubelet[3197]: W0421 10:26:04.391996 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.392057 kubelet[3197]: E0421 10:26:04.392020 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:04.418146 containerd[1990]: time="2026-04-21T10:26:04.417090428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qzfzv,Uid:ab3ee8fd-a176-4738-8aea-b5d4ed79099a,Namespace:calico-system,Attempt:0,}" Apr 21 10:26:04.455176 kubelet[3197]: E0421 10:26:04.455026 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:04.455176 kubelet[3197]: W0421 10:26:04.455054 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:04.455176 kubelet[3197]: E0421 10:26:04.455081 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:04.589235 containerd[1990]: time="2026-04-21T10:26:04.589162986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784b8797bb-8t5l9,Uid:35d4f13c-76e8-4f3f-8d3e-4abb8f5c83ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b674cbb5b7c2e44b76007992eab61abd47948122a150063166b7f572d073891\"" Apr 21 10:26:04.596598 containerd[1990]: time="2026-04-21T10:26:04.596557125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:26:04.646529 containerd[1990]: time="2026-04-21T10:26:04.639043865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:04.646529 containerd[1990]: time="2026-04-21T10:26:04.639416304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:04.646529 containerd[1990]: time="2026-04-21T10:26:04.639447184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:04.646529 containerd[1990]: time="2026-04-21T10:26:04.640918387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:04.683034 systemd[1]: Started cri-containerd-b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027.scope - libcontainer container b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027. Apr 21 10:26:04.807533 containerd[1990]: time="2026-04-21T10:26:04.807469211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qzfzv,Uid:ab3ee8fd-a176-4738-8aea-b5d4ed79099a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\"" Apr 21 10:26:05.448592 kubelet[3197]: E0421 10:26:05.448045 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:06.325420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727972497.mount: Deactivated successfully. 
Apr 21 10:26:07.448298 kubelet[3197]: E0421 10:26:07.448235 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:08.989737 containerd[1990]: time="2026-04-21T10:26:08.989677746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:08.991204 containerd[1990]: time="2026-04-21T10:26:08.991032606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:26:08.992736 containerd[1990]: time="2026-04-21T10:26:08.992678920Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:09.005403 containerd[1990]: time="2026-04-21T10:26:09.004508766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:09.005403 containerd[1990]: time="2026-04-21T10:26:09.005245448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.408428365s" Apr 21 10:26:09.005403 containerd[1990]: time="2026-04-21T10:26:09.005285268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:26:09.022457 containerd[1990]: time="2026-04-21T10:26:09.022415840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:26:09.057793 containerd[1990]: time="2026-04-21T10:26:09.056476456Z" level=info msg="CreateContainer within sandbox \"5b674cbb5b7c2e44b76007992eab61abd47948122a150063166b7f572d073891\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:26:09.083044 containerd[1990]: time="2026-04-21T10:26:09.082981141Z" level=info msg="CreateContainer within sandbox \"5b674cbb5b7c2e44b76007992eab61abd47948122a150063166b7f572d073891\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46338e98e1b8d3f7ea62eeb2574c46edf600672f7add8ec157b00d6bb9651e3a\"" Apr 21 10:26:09.085180 containerd[1990]: time="2026-04-21T10:26:09.083845488Z" level=info msg="StartContainer for \"46338e98e1b8d3f7ea62eeb2574c46edf600672f7add8ec157b00d6bb9651e3a\"" Apr 21 10:26:09.212185 systemd[1]: Started cri-containerd-46338e98e1b8d3f7ea62eeb2574c46edf600672f7add8ec157b00d6bb9651e3a.scope - libcontainer container 46338e98e1b8d3f7ea62eeb2574c46edf600672f7add8ec157b00d6bb9651e3a. 
Apr 21 10:26:09.271849 containerd[1990]: time="2026-04-21T10:26:09.271608631Z" level=info msg="StartContainer for \"46338e98e1b8d3f7ea62eeb2574c46edf600672f7add8ec157b00d6bb9651e3a\" returns successfully" Apr 21 10:26:09.448856 kubelet[3197]: E0421 10:26:09.448431 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:09.700949 kubelet[3197]: E0421 10:26:09.700910 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.700949 kubelet[3197]: W0421 10:26:09.700939 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.700949 kubelet[3197]: E0421 10:26:09.700967 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.701297 kubelet[3197]: E0421 10:26:09.701275 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.701297 kubelet[3197]: W0421 10:26:09.701292 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.701464 kubelet[3197]: E0421 10:26:09.701309 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.701597 kubelet[3197]: E0421 10:26:09.701568 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.701597 kubelet[3197]: W0421 10:26:09.701584 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.701701 kubelet[3197]: E0421 10:26:09.701598 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.701934 kubelet[3197]: E0421 10:26:09.701914 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.701934 kubelet[3197]: W0421 10:26:09.701930 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.702063 kubelet[3197]: E0421 10:26:09.701946 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.702318 kubelet[3197]: E0421 10:26:09.702209 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.702318 kubelet[3197]: W0421 10:26:09.702223 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.702318 kubelet[3197]: E0421 10:26:09.702238 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.702593 kubelet[3197]: E0421 10:26:09.702575 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.702593 kubelet[3197]: W0421 10:26:09.702590 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.702731 kubelet[3197]: E0421 10:26:09.702606 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.702884 kubelet[3197]: E0421 10:26:09.702857 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.702884 kubelet[3197]: W0421 10:26:09.702872 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.703050 kubelet[3197]: E0421 10:26:09.702886 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.703144 kubelet[3197]: E0421 10:26:09.703100 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.703144 kubelet[3197]: W0421 10:26:09.703111 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.703144 kubelet[3197]: E0421 10:26:09.703124 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.703397 kubelet[3197]: E0421 10:26:09.703371 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.703397 kubelet[3197]: W0421 10:26:09.703395 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.703503 kubelet[3197]: E0421 10:26:09.703412 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.703788 kubelet[3197]: E0421 10:26:09.703648 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.703788 kubelet[3197]: W0421 10:26:09.703674 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.703788 kubelet[3197]: E0421 10:26:09.703688 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.704234 kubelet[3197]: E0421 10:26:09.703923 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.704234 kubelet[3197]: W0421 10:26:09.703933 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.704234 kubelet[3197]: E0421 10:26:09.703967 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.704409 kubelet[3197]: E0421 10:26:09.704322 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.704409 kubelet[3197]: W0421 10:26:09.704334 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.704409 kubelet[3197]: E0421 10:26:09.704347 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.704583 kubelet[3197]: E0421 10:26:09.704560 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.704583 kubelet[3197]: W0421 10:26:09.704572 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.704724 kubelet[3197]: E0421 10:26:09.704584 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.704840 kubelet[3197]: E0421 10:26:09.704813 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.704934 kubelet[3197]: W0421 10:26:09.704840 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.704934 kubelet[3197]: E0421 10:26:09.704856 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.705132 kubelet[3197]: E0421 10:26:09.705105 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.705132 kubelet[3197]: W0421 10:26:09.705122 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.705224 kubelet[3197]: E0421 10:26:09.705135 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.709561 kubelet[3197]: E0421 10:26:09.709527 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.709561 kubelet[3197]: W0421 10:26:09.709552 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.709561 kubelet[3197]: E0421 10:26:09.709577 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.709959 kubelet[3197]: E0421 10:26:09.709935 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.709959 kubelet[3197]: W0421 10:26:09.709953 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.710122 kubelet[3197]: E0421 10:26:09.709970 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.710275 kubelet[3197]: E0421 10:26:09.710254 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.710275 kubelet[3197]: W0421 10:26:09.710270 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.710403 kubelet[3197]: E0421 10:26:09.710284 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.710647 kubelet[3197]: E0421 10:26:09.710618 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.710647 kubelet[3197]: W0421 10:26:09.710635 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.710879 kubelet[3197]: E0421 10:26:09.710651 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.710997 kubelet[3197]: E0421 10:26:09.710941 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.710997 kubelet[3197]: W0421 10:26:09.710951 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.710997 kubelet[3197]: E0421 10:26:09.710967 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.711239 kubelet[3197]: E0421 10:26:09.711230 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.711317 kubelet[3197]: W0421 10:26:09.711240 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.711317 kubelet[3197]: E0421 10:26:09.711254 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.711541 kubelet[3197]: E0421 10:26:09.711520 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.711541 kubelet[3197]: W0421 10:26:09.711538 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.711739 kubelet[3197]: E0421 10:26:09.711569 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.711902 kubelet[3197]: E0421 10:26:09.711878 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.711902 kubelet[3197]: W0421 10:26:09.711893 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.712157 kubelet[3197]: E0421 10:26:09.711907 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.712373 kubelet[3197]: E0421 10:26:09.712358 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.712448 kubelet[3197]: W0421 10:26:09.712427 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.712508 kubelet[3197]: E0421 10:26:09.712448 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.713044 kubelet[3197]: E0421 10:26:09.712934 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.713044 kubelet[3197]: W0421 10:26:09.712949 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.713044 kubelet[3197]: E0421 10:26:09.712963 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.713325 kubelet[3197]: E0421 10:26:09.713306 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.713325 kubelet[3197]: W0421 10:26:09.713323 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.713427 kubelet[3197]: E0421 10:26:09.713338 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.713659 kubelet[3197]: E0421 10:26:09.713641 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.713659 kubelet[3197]: W0421 10:26:09.713655 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.713782 kubelet[3197]: E0421 10:26:09.713669 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.714149 kubelet[3197]: E0421 10:26:09.714131 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.714149 kubelet[3197]: W0421 10:26:09.714146 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.714266 kubelet[3197]: E0421 10:26:09.714160 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.714436 kubelet[3197]: E0421 10:26:09.714418 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.714436 kubelet[3197]: W0421 10:26:09.714433 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.714615 kubelet[3197]: E0421 10:26:09.714449 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.714684 kubelet[3197]: E0421 10:26:09.714667 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.714684 kubelet[3197]: W0421 10:26:09.714678 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.714886 kubelet[3197]: E0421 10:26:09.714692 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.714985 kubelet[3197]: E0421 10:26:09.714960 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.714985 kubelet[3197]: W0421 10:26:09.714974 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.715072 kubelet[3197]: E0421 10:26:09.714987 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:09.715363 kubelet[3197]: E0421 10:26:09.715346 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.715363 kubelet[3197]: W0421 10:26:09.715360 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.715502 kubelet[3197]: E0421 10:26:09.715374 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:09.715623 kubelet[3197]: E0421 10:26:09.715605 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:09.715623 kubelet[3197]: W0421 10:26:09.715619 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:09.715704 kubelet[3197]: E0421 10:26:09.715634 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.347858 kubelet[3197]: I0421 10:26:10.347703 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-784b8797bb-8t5l9" podStartSLOduration=2.923778619 podStartE2EDuration="7.347682928s" podCreationTimestamp="2026-04-21 10:26:03 +0000 UTC" firstStartedPulling="2026-04-21 10:26:04.59617292 +0000 UTC m=+20.356343457" lastFinishedPulling="2026-04-21 10:26:09.020077216 +0000 UTC m=+24.780247766" observedRunningTime="2026-04-21 10:26:09.619846582 +0000 UTC m=+25.380017143" watchObservedRunningTime="2026-04-21 10:26:10.347682928 +0000 UTC m=+26.107853489" Apr 21 10:26:10.612911 kubelet[3197]: E0421 10:26:10.612119 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.612911 kubelet[3197]: W0421 10:26:10.612150 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.612911 kubelet[3197]: E0421 10:26:10.612179 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.615267 kubelet[3197]: E0421 10:26:10.614172 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.615267 kubelet[3197]: W0421 10:26:10.614212 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.615267 kubelet[3197]: E0421 10:26:10.614239 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.615267 kubelet[3197]: E0421 10:26:10.614577 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.615267 kubelet[3197]: W0421 10:26:10.614642 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.615267 kubelet[3197]: E0421 10:26:10.614662 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.615902 kubelet[3197]: E0421 10:26:10.615647 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.615902 kubelet[3197]: W0421 10:26:10.615665 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.615902 kubelet[3197]: E0421 10:26:10.615683 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.616189 kubelet[3197]: E0421 10:26:10.616166 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.616189 kubelet[3197]: W0421 10:26:10.616183 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.616300 kubelet[3197]: E0421 10:26:10.616207 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.616811 kubelet[3197]: E0421 10:26:10.616753 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.616811 kubelet[3197]: W0421 10:26:10.616783 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.616811 kubelet[3197]: E0421 10:26:10.616809 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.617514 kubelet[3197]: E0421 10:26:10.617445 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.617514 kubelet[3197]: W0421 10:26:10.617459 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.617514 kubelet[3197]: E0421 10:26:10.617475 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.617926 kubelet[3197]: E0421 10:26:10.617845 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.617926 kubelet[3197]: W0421 10:26:10.617858 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.617926 kubelet[3197]: E0421 10:26:10.617872 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.618352 kubelet[3197]: E0421 10:26:10.618322 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.618352 kubelet[3197]: W0421 10:26:10.618338 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.618637 kubelet[3197]: E0421 10:26:10.618354 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.618905 kubelet[3197]: E0421 10:26:10.618888 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.618905 kubelet[3197]: W0421 10:26:10.618904 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.619139 kubelet[3197]: E0421 10:26:10.618920 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.620141 kubelet[3197]: E0421 10:26:10.619865 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.620141 kubelet[3197]: W0421 10:26:10.619883 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.620141 kubelet[3197]: E0421 10:26:10.619900 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.620313 kubelet[3197]: E0421 10:26:10.620154 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.620313 kubelet[3197]: W0421 10:26:10.620165 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.620313 kubelet[3197]: E0421 10:26:10.620179 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.620711 kubelet[3197]: E0421 10:26:10.620694 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.620711 kubelet[3197]: W0421 10:26:10.620709 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.620865 kubelet[3197]: E0421 10:26:10.620724 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.621070 kubelet[3197]: E0421 10:26:10.621052 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.621136 kubelet[3197]: W0421 10:26:10.621067 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.621136 kubelet[3197]: E0421 10:26:10.621085 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.621548 kubelet[3197]: E0421 10:26:10.621530 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.621548 kubelet[3197]: W0421 10:26:10.621545 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.621670 kubelet[3197]: E0421 10:26:10.621560 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.622300 kubelet[3197]: E0421 10:26:10.622131 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.622300 kubelet[3197]: W0421 10:26:10.622146 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.622300 kubelet[3197]: E0421 10:26:10.622171 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.622798 kubelet[3197]: E0421 10:26:10.622644 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.622798 kubelet[3197]: W0421 10:26:10.622658 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.622798 kubelet[3197]: E0421 10:26:10.622673 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.623346 kubelet[3197]: E0421 10:26:10.623230 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.623346 kubelet[3197]: W0421 10:26:10.623244 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.623346 kubelet[3197]: E0421 10:26:10.623268 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.623921 kubelet[3197]: E0421 10:26:10.623902 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.623921 kubelet[3197]: W0421 10:26:10.623918 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.624356 kubelet[3197]: E0421 10:26:10.623938 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.624356 kubelet[3197]: E0421 10:26:10.624349 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.624567 kubelet[3197]: W0421 10:26:10.624361 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.624567 kubelet[3197]: E0421 10:26:10.624377 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.624936 kubelet[3197]: E0421 10:26:10.624917 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.624936 kubelet[3197]: W0421 10:26:10.624933 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.625067 kubelet[3197]: E0421 10:26:10.624948 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:26:10.625471 kubelet[3197]: E0421 10:26:10.625452 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.625471 kubelet[3197]: W0421 10:26:10.625468 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.625603 kubelet[3197]: E0421 10:26:10.625483 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:26:10.625936 kubelet[3197]: E0421 10:26:10.625921 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:26:10.625936 kubelet[3197]: W0421 10:26:10.625936 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:26:10.626093 kubelet[3197]: E0421 10:26:10.625950 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:26:10.627120 kubelet[3197]: E0421 10:26:10.626972 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.627120 kubelet[3197]: W0421 10:26:10.626987 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.627120 kubelet[3197]: E0421 10:26:10.627002 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.627683 kubelet[3197]: E0421 10:26:10.627564 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.627895 kubelet[3197]: W0421 10:26:10.627868 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.628802 kubelet[3197]: E0421 10:26:10.628785 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.629377 kubelet[3197]: E0421 10:26:10.629337 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.629503 kubelet[3197]: W0421 10:26:10.629489 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.629817 kubelet[3197]: E0421 10:26:10.629551 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.630073 kubelet[3197]: E0421 10:26:10.630019 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.630506 kubelet[3197]: W0421 10:26:10.630030 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.630506 kubelet[3197]: E0421 10:26:10.630426 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.631168 kubelet[3197]: E0421 10:26:10.631118 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.631168 kubelet[3197]: W0421 10:26:10.631134 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.631168 kubelet[3197]: E0421 10:26:10.631152 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.632220 kubelet[3197]: E0421 10:26:10.631966 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.632220 kubelet[3197]: W0421 10:26:10.632052 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.632220 kubelet[3197]: E0421 10:26:10.632066 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.632728 kubelet[3197]: E0421 10:26:10.632515 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.632728 kubelet[3197]: W0421 10:26:10.632530 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.632728 kubelet[3197]: E0421 10:26:10.632544 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.633438 kubelet[3197]: E0421 10:26:10.633134 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.633438 kubelet[3197]: W0421 10:26:10.633169 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.633438 kubelet[3197]: E0421 10:26:10.633184 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.634175 kubelet[3197]: E0421 10:26:10.633946 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.634175 kubelet[3197]: W0421 10:26:10.633962 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.634175 kubelet[3197]: E0421 10:26:10.633976 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.634653 kubelet[3197]: E0421 10:26:10.634530 3197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:26:10.634653 kubelet[3197]: W0421 10:26:10.634545 3197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:26:10.634653 kubelet[3197]: E0421 10:26:10.634559 3197 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:26:10.727164 containerd[1990]: time="2026-04-21T10:26:10.727105538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:10.730193 containerd[1990]: time="2026-04-21T10:26:10.730130828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 21 10:26:10.733714 containerd[1990]: time="2026-04-21T10:26:10.733629892Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:10.739702 containerd[1990]: time="2026-04-21T10:26:10.739367087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:10.742316 containerd[1990]: time="2026-04-21T10:26:10.740500436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.717851411s"
Apr 21 10:26:10.742316 containerd[1990]: time="2026-04-21T10:26:10.740545424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 21 10:26:10.748229 containerd[1990]: time="2026-04-21T10:26:10.748190689Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 21 10:26:10.802031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267047461.mount: Deactivated successfully.
Apr 21 10:26:10.807152 containerd[1990]: time="2026-04-21T10:26:10.807103274Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3\""
Apr 21 10:26:10.807933 containerd[1990]: time="2026-04-21T10:26:10.807782503Z" level=info msg="StartContainer for \"ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3\""
Apr 21 10:26:10.849021 systemd[1]: Started cri-containerd-ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3.scope - libcontainer container ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3.
Apr 21 10:26:10.880107 containerd[1990]: time="2026-04-21T10:26:10.879936293Z" level=info msg="StartContainer for \"ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3\" returns successfully"
Apr 21 10:26:10.899174 systemd[1]: cri-containerd-ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3.scope: Deactivated successfully.
Apr 21 10:26:11.006663 containerd[1990]: time="2026-04-21T10:26:10.999061221Z" level=info msg="shim disconnected" id=ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3 namespace=k8s.io
Apr 21 10:26:11.006940 containerd[1990]: time="2026-04-21T10:26:11.006667827Z" level=warning msg="cleaning up after shim disconnected" id=ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3 namespace=k8s.io
Apr 21 10:26:11.006940 containerd[1990]: time="2026-04-21T10:26:11.006687850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:26:11.037766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec9769a22c0625b8120f5727c01ea1d851d339704fc45224152e82b6972da6c3-rootfs.mount: Deactivated successfully.
Apr 21 10:26:11.448449 kubelet[3197]: E0421 10:26:11.448402 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:11.612728 containerd[1990]: time="2026-04-21T10:26:11.612685060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 21 10:26:13.448564 kubelet[3197]: E0421 10:26:13.448312 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:15.448499 kubelet[3197]: E0421 10:26:15.447938 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:17.448394 kubelet[3197]: E0421 10:26:17.448299 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:19.448949 kubelet[3197]: E0421 10:26:19.448878 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:21.449098 kubelet[3197]: E0421 10:26:21.449035 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:23.449042 kubelet[3197]: E0421 10:26:23.448979 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:24.831231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357844921.mount: Deactivated successfully.
Apr 21 10:26:24.889884 containerd[1990]: time="2026-04-21T10:26:24.885434648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 21 10:26:24.926985 containerd[1990]: time="2026-04-21T10:26:24.882560363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:24.926985 containerd[1990]: time="2026-04-21T10:26:24.919225430Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:24.926985 containerd[1990]: time="2026-04-21T10:26:24.923612203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:24.926985 containerd[1990]: time="2026-04-21T10:26:24.924721573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 13.311995812s"
Apr 21 10:26:24.926985 containerd[1990]: time="2026-04-21T10:26:24.924771283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 21 10:26:24.932584 containerd[1990]: time="2026-04-21T10:26:24.932547863Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 21 10:26:24.963590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258916627.mount: Deactivated successfully.
Apr 21 10:26:24.973364 containerd[1990]: time="2026-04-21T10:26:24.973305545Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739\""
Apr 21 10:26:24.974364 containerd[1990]: time="2026-04-21T10:26:24.974329530Z" level=info msg="StartContainer for \"f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739\""
Apr 21 10:26:25.154011 systemd[1]: Started cri-containerd-f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739.scope - libcontainer container f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739.
Apr 21 10:26:25.199166 containerd[1990]: time="2026-04-21T10:26:25.199093156Z" level=info msg="StartContainer for \"f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739\" returns successfully"
Apr 21 10:26:25.263500 systemd[1]: cri-containerd-f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739.scope: Deactivated successfully.
Apr 21 10:26:25.313881 containerd[1990]: time="2026-04-21T10:26:25.310056335Z" level=info msg="shim disconnected" id=f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739 namespace=k8s.io
Apr 21 10:26:25.314375 containerd[1990]: time="2026-04-21T10:26:25.314140232Z" level=warning msg="cleaning up after shim disconnected" id=f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739 namespace=k8s.io
Apr 21 10:26:25.314375 containerd[1990]: time="2026-04-21T10:26:25.314166096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:26:25.329699 containerd[1990]: time="2026-04-21T10:26:25.329643535Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:26:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:26:25.448335 kubelet[3197]: E0421 10:26:25.448276 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:25.684501 containerd[1990]: time="2026-04-21T10:26:25.684438555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 21 10:26:25.831393 systemd[1]: run-containerd-runc-k8s.io-f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739-runc.u8M7kq.mount: Deactivated successfully.
Apr 21 10:26:25.831932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f93b53eacabb5deea4910cd1e452b7d02433f1532d39149ad9a1108170c51739-rootfs.mount: Deactivated successfully.
Apr 21 10:26:27.448373 kubelet[3197]: E0421 10:26:27.448304 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:29.450035 kubelet[3197]: E0421 10:26:29.448368 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:30.213373 containerd[1990]: time="2026-04-21T10:26:30.213316937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:30.214741 containerd[1990]: time="2026-04-21T10:26:30.214590792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 21 10:26:30.216438 containerd[1990]: time="2026-04-21T10:26:30.216151514Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:30.219183 containerd[1990]: time="2026-04-21T10:26:30.218924265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:30.219790 containerd[1990]: time="2026-04-21T10:26:30.219731311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.535237738s"
Apr 21 10:26:30.220579 containerd[1990]: time="2026-04-21T10:26:30.219790148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 21 10:26:30.225722 containerd[1990]: time="2026-04-21T10:26:30.225684702Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 21 10:26:30.244161 containerd[1990]: time="2026-04-21T10:26:30.244101271Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a\""
Apr 21 10:26:30.244853 containerd[1990]: time="2026-04-21T10:26:30.244791949Z" level=info msg="StartContainer for \"5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a\""
Apr 21 10:26:30.287012 systemd[1]: Started cri-containerd-5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a.scope - libcontainer container 5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a.
Apr 21 10:26:30.326744 containerd[1990]: time="2026-04-21T10:26:30.326694852Z" level=info msg="StartContainer for \"5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a\" returns successfully"
Apr 21 10:26:31.448175 kubelet[3197]: E0421 10:26:31.448083 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e"
Apr 21 10:26:31.589789 systemd[1]: cri-containerd-5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a.scope: Deactivated successfully.
Apr 21 10:26:31.633770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a-rootfs.mount: Deactivated successfully.
Apr 21 10:26:31.658120 containerd[1990]: time="2026-04-21T10:26:31.658016256Z" level=info msg="shim disconnected" id=5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a namespace=k8s.io
Apr 21 10:26:31.658120 containerd[1990]: time="2026-04-21T10:26:31.658110572Z" level=warning msg="cleaning up after shim disconnected" id=5c37ea52d6642842fc2796996f8d0a0c43f0cae9ad3c129517f290942f7d989a namespace=k8s.io
Apr 21 10:26:31.658120 containerd[1990]: time="2026-04-21T10:26:31.658123577Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:26:31.662272 kubelet[3197]: I0421 10:26:31.650155 3197 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 21 10:26:31.808284 containerd[1990]: time="2026-04-21T10:26:31.808119544Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:26:31.834189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568051818.mount: Deactivated successfully.
Apr 21 10:26:31.839473 containerd[1990]: time="2026-04-21T10:26:31.839436162Z" level=info msg="CreateContainer within sandbox \"b30be4c4319edfc18812d1a4175a0d0845d1759c0818dc75846be740bcb62027\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127\""
Apr 21 10:26:31.842162 containerd[1990]: time="2026-04-21T10:26:31.841599479Z" level=info msg="StartContainer for \"81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127\""
Apr 21 10:26:31.891129 systemd[1]: Started cri-containerd-81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127.scope - libcontainer container 81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127.
Apr 21 10:26:31.962945 containerd[1990]: time="2026-04-21T10:26:31.962080496Z" level=info msg="StartContainer for \"81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127\" returns successfully"
Apr 21 10:26:32.007503 kubelet[3197]: I0421 10:26:32.006489 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee27081f-d3fb-48c1-8c12-8d86f1601923-config\") pod \"goldmane-9f7667bb8-sxk82\" (UID: \"ee27081f-d3fb-48c1-8c12-8d86f1601923\") " pod="calico-system/goldmane-9f7667bb8-sxk82"
Apr 21 10:26:32.007503 kubelet[3197]: I0421 10:26:32.006559 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfg8m\" (UniqueName: \"kubernetes.io/projected/ee27081f-d3fb-48c1-8c12-8d86f1601923-kube-api-access-qfg8m\") pod \"goldmane-9f7667bb8-sxk82\" (UID: \"ee27081f-d3fb-48c1-8c12-8d86f1601923\") " pod="calico-system/goldmane-9f7667bb8-sxk82"
Apr 21 10:26:32.007503 kubelet[3197]: I0421 10:26:32.006589 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb-config-volume\") pod \"coredns-7d764666f9-lw96w\" (UID: \"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb\") " pod="kube-system/coredns-7d764666f9-lw96w"
Apr 21 10:26:32.007503 kubelet[3197]: I0421 10:26:32.006618 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn6jj\" (UniqueName: \"kubernetes.io/projected/40d3332a-bed0-42c4-9601-3b65769379ef-kube-api-access-vn6jj\") pod \"calico-apiserver-649f9d4c46-8gqsh\" (UID: \"40d3332a-bed0-42c4-9601-3b65769379ef\") " pod="calico-system/calico-apiserver-649f9d4c46-8gqsh"
Apr 21 10:26:32.007503 kubelet[3197]: I0421 10:26:32.006644 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8c9x\" (UniqueName: \"kubernetes.io/projected/7a282b52-42ea-480c-9fa0-f3ad8d196a94-kube-api-access-g8c9x\") pod \"calico-kube-controllers-55f8bbbb7b-2qs25\" (UID: \"7a282b52-42ea-480c-9fa0-f3ad8d196a94\") " pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25"
Apr 21 10:26:32.007869 kubelet[3197]: I0421 10:26:32.006679 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40d3332a-bed0-42c4-9601-3b65769379ef-calico-apiserver-certs\") pod \"calico-apiserver-649f9d4c46-8gqsh\" (UID: \"40d3332a-bed0-42c4-9601-3b65769379ef\") " pod="calico-system/calico-apiserver-649f9d4c46-8gqsh"
Apr 21 10:26:32.007869 kubelet[3197]: I0421 10:26:32.006744 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgbr\" (UniqueName: \"kubernetes.io/projected/5025f349-a8c7-438c-8426-5f946767fac4-kube-api-access-svgbr\") pod \"coredns-7d764666f9-4g4wj\" (UID: \"5025f349-a8c7-438c-8426-5f946767fac4\") " pod="kube-system/coredns-7d764666f9-4g4wj"
Apr 21 10:26:32.007869 kubelet[3197]: I0421 10:26:32.006805 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-backend-key-pair\") pod \"whisker-7f86cd7d4b-c5tr7\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " pod="calico-system/whisker-7f86cd7d4b-c5tr7"
Apr 21 10:26:32.007869 kubelet[3197]: I0421 10:26:32.006832 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5c26\" (UniqueName: \"kubernetes.io/projected/1970c874-1e64-4d7c-990c-50c703ddceae-kube-api-access-g5c26\") pod \"whisker-7f86cd7d4b-c5tr7\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " pod="calico-system/whisker-7f86cd7d4b-c5tr7"
Apr 21 10:26:32.007869 kubelet[3197]: I0421 10:26:32.006869 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5025f349-a8c7-438c-8426-5f946767fac4-config-volume\") pod \"coredns-7d764666f9-4g4wj\" (UID: \"5025f349-a8c7-438c-8426-5f946767fac4\") " pod="kube-system/coredns-7d764666f9-4g4wj"
Apr 21 10:26:32.008038 kubelet[3197]: I0421 10:26:32.006893 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce8a55a1-902f-4d92-8943-f6c346590495-calico-apiserver-certs\") pod \"calico-apiserver-649f9d4c46-7d8jv\" (UID: \"ce8a55a1-902f-4d92-8943-f6c346590495\") " pod="calico-system/calico-apiserver-649f9d4c46-7d8jv"
Apr 21 10:26:32.008038 kubelet[3197]: I0421 10:26:32.006917 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a282b52-42ea-480c-9fa0-f3ad8d196a94-tigera-ca-bundle\") pod \"calico-kube-controllers-55f8bbbb7b-2qs25\" (UID: \"7a282b52-42ea-480c-9fa0-f3ad8d196a94\") " pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25"
Apr 21 10:26:32.008038 kubelet[3197]: I0421 10:26:32.006945 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-nginx-config\") pod \"whisker-7f86cd7d4b-c5tr7\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " pod="calico-system/whisker-7f86cd7d4b-c5tr7"
Apr 21 10:26:32.008038 kubelet[3197]: I0421 10:26:32.006969 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee27081f-d3fb-48c1-8c12-8d86f1601923-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-sxk82\" (UID: \"ee27081f-d3fb-48c1-8c12-8d86f1601923\") " pod="calico-system/goldmane-9f7667bb8-sxk82"
Apr 21 10:26:32.008038 kubelet[3197]: I0421 10:26:32.006993 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee27081f-d3fb-48c1-8c12-8d86f1601923-goldmane-key-pair\") pod \"goldmane-9f7667bb8-sxk82\" (UID: \"ee27081f-d3fb-48c1-8c12-8d86f1601923\") " pod="calico-system/goldmane-9f7667bb8-sxk82"
Apr 21 10:26:32.008218 kubelet[3197]: I0421 10:26:32.007017 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmmk\" (UniqueName: \"kubernetes.io/projected/b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb-kube-api-access-wtmmk\") pod \"coredns-7d764666f9-lw96w\" (UID: \"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb\") " pod="kube-system/coredns-7d764666f9-lw96w"
Apr 21 10:26:32.008218 kubelet[3197]: I0421 10:26:32.007045 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-ca-bundle\") pod \"whisker-7f86cd7d4b-c5tr7\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " pod="calico-system/whisker-7f86cd7d4b-c5tr7"
Apr 21 10:26:32.012923 kubelet[3197]: I0421 10:26:32.007069 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkh9f\" (UniqueName: \"kubernetes.io/projected/ce8a55a1-902f-4d92-8943-f6c346590495-kube-api-access-tkh9f\") pod \"calico-apiserver-649f9d4c46-7d8jv\" (UID: \"ce8a55a1-902f-4d92-8943-f6c346590495\") " pod="calico-system/calico-apiserver-649f9d4c46-7d8jv"
Apr 21 10:26:32.020879 systemd[1]: Created slice kubepods-burstable-podb63d72e0_ee8f_4d5f_bf38_cc48f379bdeb.slice - libcontainer container kubepods-burstable-podb63d72e0_ee8f_4d5f_bf38_cc48f379bdeb.slice.
Apr 21 10:26:32.024710 systemd[1]: Created slice kubepods-burstable-pod5025f349_a8c7_438c_8426_5f946767fac4.slice - libcontainer container kubepods-burstable-pod5025f349_a8c7_438c_8426_5f946767fac4.slice.
Apr 21 10:26:32.040571 systemd[1]: Created slice kubepods-besteffort-pod1970c874_1e64_4d7c_990c_50c703ddceae.slice - libcontainer container kubepods-besteffort-pod1970c874_1e64_4d7c_990c_50c703ddceae.slice.
Apr 21 10:26:32.047786 systemd[1]: Created slice kubepods-besteffort-podee27081f_d3fb_48c1_8c12_8d86f1601923.slice - libcontainer container kubepods-besteffort-podee27081f_d3fb_48c1_8c12_8d86f1601923.slice.
Apr 21 10:26:32.068532 systemd[1]: Created slice kubepods-besteffort-pod40d3332a_bed0_42c4_9601_3b65769379ef.slice - libcontainer container kubepods-besteffort-pod40d3332a_bed0_42c4_9601_3b65769379ef.slice.
Apr 21 10:26:32.077714 systemd[1]: Created slice kubepods-besteffort-podce8a55a1_902f_4d92_8943_f6c346590495.slice - libcontainer container kubepods-besteffort-podce8a55a1_902f_4d92_8943_f6c346590495.slice.
Apr 21 10:26:32.086936 systemd[1]: Created slice kubepods-besteffort-pod7a282b52_42ea_480c_9fa0_f3ad8d196a94.slice - libcontainer container kubepods-besteffort-pod7a282b52_42ea_480c_9fa0_f3ad8d196a94.slice.
Apr 21 10:26:32.366797 containerd[1990]: time="2026-04-21T10:26:32.364899189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lw96w,Uid:b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb,Namespace:kube-system,Attempt:0,}"
Apr 21 10:26:32.366797 containerd[1990]: time="2026-04-21T10:26:32.365604421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4g4wj,Uid:5025f349-a8c7-438c-8426-5f946767fac4,Namespace:kube-system,Attempt:0,}"
Apr 21 10:26:32.378268 containerd[1990]: time="2026-04-21T10:26:32.377378621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f86cd7d4b-c5tr7,Uid:1970c874-1e64-4d7c-990c-50c703ddceae,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:32.380244 containerd[1990]: time="2026-04-21T10:26:32.378726187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sxk82,Uid:ee27081f-d3fb-48c1-8c12-8d86f1601923,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:32.425194 containerd[1990]: time="2026-04-21T10:26:32.423957523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f8bbbb7b-2qs25,Uid:7a282b52-42ea-480c-9fa0-f3ad8d196a94,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:32.425194 containerd[1990]: time="2026-04-21T10:26:32.424465683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-8gqsh,Uid:40d3332a-bed0-42c4-9601-3b65769379ef,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:32.425194 containerd[1990]: time="2026-04-21T10:26:32.424698656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-7d8jv,Uid:ce8a55a1-902f-4d92-8943-f6c346590495,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:32.826128 kubelet[3197]: I0421 10:26:32.824954 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-qzfzv" podStartSLOduration=1.895582041 podStartE2EDuration="28.809284785s" podCreationTimestamp="2026-04-21 10:26:04 +0000 UTC" firstStartedPulling="2026-04-21 10:26:04.809397459 +0000 UTC m=+20.569567997" lastFinishedPulling="2026-04-21 10:26:31.72310018 +0000 UTC m=+47.483270741" observedRunningTime="2026-04-21 10:26:32.807533761 +0000 UTC m=+48.567704376" watchObservedRunningTime="2026-04-21 10:26:32.809284785 +0000 UTC m=+48.569455343"
Apr 21 10:26:33.160096 containerd[1990]: time="2026-04-21T10:26:33.159522292Z" level=error msg="Failed to destroy network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.167906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c-shm.mount: Deactivated successfully.
Apr 21 10:26:33.196501 containerd[1990]: time="2026-04-21T10:26:33.196428825Z" level=error msg="encountered an error cleaning up failed sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.196686 containerd[1990]: time="2026-04-21T10:26:33.196538548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f8bbbb7b-2qs25,Uid:7a282b52-42ea-480c-9fa0-f3ad8d196a94,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.199219 containerd[1990]: time="2026-04-21T10:26:33.199163966Z" level=error msg="Failed to destroy network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.199795 containerd[1990]: time="2026-04-21T10:26:33.199626485Z" level=error msg="encountered an error cleaning up failed sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.199795 containerd[1990]: time="2026-04-21T10:26:33.199705874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-8gqsh,Uid:40d3332a-bed0-42c4-9601-3b65769379ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:26:33.225483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df-shm.mount: Deactivated successfully.
Apr 21 10:26:33.236798 containerd[1990]: time="2026-04-21T10:26:33.235113784Z" level=error msg="Failed to destroy network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.247142 containerd[1990]: time="2026-04-21T10:26:33.246962899Z" level=error msg="encountered an error cleaning up failed sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.247142 containerd[1990]: time="2026-04-21T10:26:33.247053176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sxk82,Uid:ee27081f-d3fb-48c1-8c12-8d86f1601923,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.251861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a-shm.mount: Deactivated successfully. 
Apr 21 10:26:33.257523 kubelet[3197]: E0421 10:26:33.257474 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.259036 kubelet[3197]: E0421 10:26:33.258967 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.260464 kubelet[3197]: E0421 10:26:33.259892 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-sxk82" Apr 21 10:26:33.260464 kubelet[3197]: E0421 10:26:33.259939 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-sxk82" Apr 21 10:26:33.260464 kubelet[3197]: E0421 10:26:33.260011 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-9f7667bb8-sxk82_calico-system(ee27081f-d3fb-48c1-8c12-8d86f1601923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-sxk82_calico-system(ee27081f-d3fb-48c1-8c12-8d86f1601923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-sxk82" podUID="ee27081f-d3fb-48c1-8c12-8d86f1601923" Apr 21 10:26:33.260745 kubelet[3197]: E0421 10:26:33.260142 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25" Apr 21 10:26:33.260745 kubelet[3197]: E0421 10:26:33.260169 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25" Apr 21 10:26:33.260745 kubelet[3197]: E0421 10:26:33.260233 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f8bbbb7b-2qs25_calico-system(7a282b52-42ea-480c-9fa0-f3ad8d196a94)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-kube-controllers-55f8bbbb7b-2qs25_calico-system(7a282b52-42ea-480c-9fa0-f3ad8d196a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25" podUID="7a282b52-42ea-480c-9fa0-f3ad8d196a94" Apr 21 10:26:33.261922 kubelet[3197]: E0421 10:26:33.260306 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.261922 kubelet[3197]: E0421 10:26:33.260332 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-649f9d4c46-8gqsh" Apr 21 10:26:33.261922 kubelet[3197]: E0421 10:26:33.260351 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-649f9d4c46-8gqsh" Apr 21 
10:26:33.263384 kubelet[3197]: E0421 10:26:33.260406 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649f9d4c46-8gqsh_calico-system(40d3332a-bed0-42c4-9601-3b65769379ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649f9d4c46-8gqsh_calico-system(40d3332a-bed0-42c4-9601-3b65769379ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-649f9d4c46-8gqsh" podUID="40d3332a-bed0-42c4-9601-3b65769379ef" Apr 21 10:26:33.282297 containerd[1990]: time="2026-04-21T10:26:33.282063805Z" level=error msg="Failed to destroy network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.283986 containerd[1990]: time="2026-04-21T10:26:33.282779868Z" level=error msg="encountered an error cleaning up failed sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.283986 containerd[1990]: time="2026-04-21T10:26:33.282847198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4g4wj,Uid:5025f349-a8c7-438c-8426-5f946767fac4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.284109 kubelet[3197]: E0421 10:26:33.283114 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.284109 kubelet[3197]: E0421 10:26:33.283224 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4g4wj" Apr 21 10:26:33.284109 kubelet[3197]: E0421 10:26:33.283254 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4g4wj" Apr 21 10:26:33.285052 kubelet[3197]: E0421 10:26:33.283440 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-4g4wj_kube-system(5025f349-a8c7-438c-8426-5f946767fac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7d764666f9-4g4wj_kube-system(5025f349-a8c7-438c-8426-5f946767fac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-4g4wj" podUID="5025f349-a8c7-438c-8426-5f946767fac4" Apr 21 10:26:33.289564 containerd[1990]: time="2026-04-21T10:26:33.289515899Z" level=error msg="Failed to destroy network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.290233 containerd[1990]: time="2026-04-21T10:26:33.290065050Z" level=error msg="encountered an error cleaning up failed sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.290233 containerd[1990]: time="2026-04-21T10:26:33.290131803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f86cd7d4b-c5tr7,Uid:1970c874-1e64-4d7c-990c-50c703ddceae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.292966 kubelet[3197]: E0421 10:26:33.291436 3197 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.292966 kubelet[3197]: E0421 10:26:33.291502 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f86cd7d4b-c5tr7" Apr 21 10:26:33.292966 kubelet[3197]: E0421 10:26:33.291526 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f86cd7d4b-c5tr7" Apr 21 10:26:33.293131 kubelet[3197]: E0421 10:26:33.291586 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f86cd7d4b-c5tr7_calico-system(1970c874-1e64-4d7c-990c-50c703ddceae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f86cd7d4b-c5tr7_calico-system(1970c874-1e64-4d7c-990c-50c703ddceae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-7f86cd7d4b-c5tr7" podUID="1970c874-1e64-4d7c-990c-50c703ddceae" Apr 21 10:26:33.296014 containerd[1990]: time="2026-04-21T10:26:33.295966973Z" level=error msg="Failed to destroy network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.296525 containerd[1990]: time="2026-04-21T10:26:33.296411892Z" level=error msg="encountered an error cleaning up failed sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.296621 containerd[1990]: time="2026-04-21T10:26:33.296519946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-7d8jv,Uid:ce8a55a1-902f-4d92-8943-f6c346590495,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.297128 kubelet[3197]: E0421 10:26:33.296952 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.297128 kubelet[3197]: E0421 
10:26:33.297015 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-649f9d4c46-7d8jv" Apr 21 10:26:33.297128 kubelet[3197]: E0421 10:26:33.297040 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-649f9d4c46-7d8jv" Apr 21 10:26:33.297290 kubelet[3197]: E0421 10:26:33.297108 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649f9d4c46-7d8jv_calico-system(ce8a55a1-902f-4d92-8943-f6c346590495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649f9d4c46-7d8jv_calico-system(ce8a55a1-902f-4d92-8943-f6c346590495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-649f9d4c46-7d8jv" podUID="ce8a55a1-902f-4d92-8943-f6c346590495" Apr 21 10:26:33.301196 containerd[1990]: time="2026-04-21T10:26:33.301150257Z" level=error msg="Failed to destroy network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.301547 containerd[1990]: time="2026-04-21T10:26:33.301510687Z" level=error msg="encountered an error cleaning up failed sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.301632 containerd[1990]: time="2026-04-21T10:26:33.301573601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lw96w,Uid:b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.302027 kubelet[3197]: E0421 10:26:33.301839 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.302027 kubelet[3197]: E0421 10:26:33.301899 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lw96w" Apr 21 10:26:33.302027 kubelet[3197]: E0421 10:26:33.301917 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lw96w" Apr 21 10:26:33.302160 kubelet[3197]: E0421 10:26:33.301975 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-lw96w_kube-system(b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-lw96w_kube-system(b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-lw96w" podUID="b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb" Apr 21 10:26:33.454939 systemd[1]: Created slice kubepods-besteffort-pod9ef1622f_119b_4965_be74_eb6954ebbd5e.slice - libcontainer container kubepods-besteffort-pod9ef1622f_119b_4965_be74_eb6954ebbd5e.slice. 
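(Not part of the captured log.) Every sandbox failure above has the same root cause: the Calico CNI plugin stats `/var/lib/calico/nodename`, a file the calico/node container writes only after it starts and mounts `/var/lib/calico/`. A minimal triage sketch for that condition — the function name and the alternate path argument are illustrative, not taken from the log — just checks whether the file exists on the node:

```shell
# check_calico_nodename: report whether the CNI nodename file is present.
# With no argument it checks the real path the plugin stats; a different
# path may be passed for testing. Prints "present: <contents>" or
# "missing: <path>".
check_calico_nodename() {
  f="${1:-/var/lib/calico/nodename}"
  if [ -f "$f" ]; then
    echo "present: $(cat "$f")"
  else
    echo "missing: $f"
  fi
}
```

Until the file exists, every CNI add and delete fails with the identical stat error, which is why the kubelet above keeps emitting `CreatePodSandbox` failures and retrying for each pending pod, and why even the cleanup (`delete`) path of each failed sandbox reports the same error.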
Apr 21 10:26:33.469778 containerd[1990]: time="2026-04-21T10:26:33.469593447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwns8,Uid:9ef1622f-119b-4965-be74-eb6954ebbd5e,Namespace:calico-system,Attempt:0,}" Apr 21 10:26:33.639988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be-shm.mount: Deactivated successfully. Apr 21 10:26:33.640333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0-shm.mount: Deactivated successfully. Apr 21 10:26:33.640437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c-shm.mount: Deactivated successfully. Apr 21 10:26:33.640744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89-shm.mount: Deactivated successfully. Apr 21 10:26:33.693330 containerd[1990]: time="2026-04-21T10:26:33.693268970Z" level=error msg="Failed to destroy network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.694806 containerd[1990]: time="2026-04-21T10:26:33.694177047Z" level=error msg="encountered an error cleaning up failed sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.694806 containerd[1990]: time="2026-04-21T10:26:33.694258089Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-fwns8,Uid:9ef1622f-119b-4965-be74-eb6954ebbd5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.694982 kubelet[3197]: E0421 10:26:33.694512 3197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.694982 kubelet[3197]: E0421 10:26:33.694573 3197 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:33.694982 kubelet[3197]: E0421 10:26:33.694603 3197 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fwns8" Apr 21 10:26:33.695139 kubelet[3197]: E0421 10:26:33.694677 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-fwns8_calico-system(9ef1622f-119b-4965-be74-eb6954ebbd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fwns8_calico-system(9ef1622f-119b-4965-be74-eb6954ebbd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:33.700252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600-shm.mount: Deactivated successfully. Apr 21 10:26:33.777689 containerd[1990]: time="2026-04-21T10:26:33.775654780Z" level=info msg="StopPodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" Apr 21 10:26:33.777689 containerd[1990]: time="2026-04-21T10:26:33.777517573Z" level=info msg="Ensure that sandbox 2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0 in task-service has been cleanup successfully" Apr 21 10:26:33.784973 kubelet[3197]: I0421 10:26:33.784826 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:33.785887 kubelet[3197]: I0421 10:26:33.785850 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:26:33.791107 kubelet[3197]: I0421 10:26:33.789197 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:33.791683 containerd[1990]: time="2026-04-21T10:26:33.791647513Z" level=info 
msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" Apr 21 10:26:33.792681 containerd[1990]: time="2026-04-21T10:26:33.792649977Z" level=info msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\"" Apr 21 10:26:33.793048 containerd[1990]: time="2026-04-21T10:26:33.793021699Z" level=info msg="Ensure that sandbox 1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df in task-service has been cleanup successfully" Apr 21 10:26:33.798778 kubelet[3197]: I0421 10:26:33.798737 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:26:33.799727 containerd[1990]: time="2026-04-21T10:26:33.799679059Z" level=info msg="Ensure that sandbox 8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c in task-service has been cleanup successfully" Apr 21 10:26:33.801864 containerd[1990]: time="2026-04-21T10:26:33.801831191Z" level=info msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\"" Apr 21 10:26:33.805596 containerd[1990]: time="2026-04-21T10:26:33.805558447Z" level=info msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\"" Apr 21 10:26:33.805694 kubelet[3197]: I0421 10:26:33.804796 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:33.807018 containerd[1990]: time="2026-04-21T10:26:33.806966496Z" level=info msg="Ensure that sandbox 8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89 in task-service has been cleanup successfully" Apr 21 10:26:33.807451 containerd[1990]: time="2026-04-21T10:26:33.807409621Z" level=info msg="Ensure that sandbox 15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c in task-service has been cleanup successfully" Apr 21 
10:26:33.814275 kubelet[3197]: I0421 10:26:33.814241 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:33.816910 containerd[1990]: time="2026-04-21T10:26:33.816404571Z" level=info msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" Apr 21 10:26:33.816910 containerd[1990]: time="2026-04-21T10:26:33.816623170Z" level=info msg="Ensure that sandbox 70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600 in task-service has been cleanup successfully" Apr 21 10:26:33.819866 kubelet[3197]: I0421 10:26:33.819736 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:33.822358 containerd[1990]: time="2026-04-21T10:26:33.822322256Z" level=info msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\"" Apr 21 10:26:33.823042 containerd[1990]: time="2026-04-21T10:26:33.822739418Z" level=info msg="Ensure that sandbox ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be in task-service has been cleanup successfully" Apr 21 10:26:33.856967 kubelet[3197]: I0421 10:26:33.856534 3197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:33.860478 containerd[1990]: time="2026-04-21T10:26:33.860284015Z" level=info msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" Apr 21 10:26:33.862541 containerd[1990]: time="2026-04-21T10:26:33.862174716Z" level=info msg="Ensure that sandbox 46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a in task-service has been cleanup successfully" Apr 21 10:26:33.974971 containerd[1990]: time="2026-04-21T10:26:33.974909606Z" level=error msg="StopPodSandbox for 
\"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" failed" error="failed to destroy network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.975605 kubelet[3197]: E0421 10:26:33.975211 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:33.975605 kubelet[3197]: E0421 10:26:33.975275 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0"} Apr 21 10:26:33.975605 kubelet[3197]: E0421 10:26:33.975358 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce8a55a1-902f-4d92-8943-f6c346590495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:33.975605 kubelet[3197]: E0421 10:26:33.975405 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce8a55a1-902f-4d92-8943-f6c346590495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-649f9d4c46-7d8jv" podUID="ce8a55a1-902f-4d92-8943-f6c346590495" Apr 21 10:26:33.986724 containerd[1990]: time="2026-04-21T10:26:33.986664156Z" level=error msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" failed" error="failed to destroy network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:33.987401 kubelet[3197]: E0421 10:26:33.987178 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:33.987401 kubelet[3197]: E0421 10:26:33.987239 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600"} Apr 21 10:26:33.987401 kubelet[3197]: E0421 10:26:33.987282 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ef1622f-119b-4965-be74-eb6954ebbd5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:33.987401 kubelet[3197]: E0421 10:26:33.987330 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ef1622f-119b-4965-be74-eb6954ebbd5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fwns8" podUID="9ef1622f-119b-4965-be74-eb6954ebbd5e" Apr 21 10:26:34.023197 containerd[1990]: time="2026-04-21T10:26:34.022630017Z" level=error msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" failed" error="failed to destroy network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.023357 kubelet[3197]: E0421 10:26:34.022947 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:26:34.023357 kubelet[3197]: E0421 10:26:34.023012 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89"} Apr 21 10:26:34.024417 kubelet[3197]: E0421 10:26:34.024227 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5025f349-a8c7-438c-8426-5f946767fac4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.024417 kubelet[3197]: E0421 10:26:34.024288 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5025f349-a8c7-438c-8426-5f946767fac4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-4g4wj" podUID="5025f349-a8c7-438c-8426-5f946767fac4" Apr 21 10:26:34.056015 containerd[1990]: time="2026-04-21T10:26:34.055888577Z" level=error msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" failed" error="failed to destroy network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.056695 kubelet[3197]: E0421 10:26:34.056497 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:26:34.056695 kubelet[3197]: E0421 10:26:34.056552 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df"} Apr 21 10:26:34.056695 kubelet[3197]: E0421 10:26:34.056603 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40d3332a-bed0-42c4-9601-3b65769379ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.056695 kubelet[3197]: E0421 10:26:34.056640 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40d3332a-bed0-42c4-9601-3b65769379ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-649f9d4c46-8gqsh" podUID="40d3332a-bed0-42c4-9601-3b65769379ef" Apr 21 10:26:34.057600 containerd[1990]: time="2026-04-21T10:26:34.057210028Z" level=error msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" failed" error="failed to destroy network 
for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.057687 kubelet[3197]: E0421 10:26:34.057459 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:34.057687 kubelet[3197]: E0421 10:26:34.057497 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c"} Apr 21 10:26:34.057687 kubelet[3197]: E0421 10:26:34.057529 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a282b52-42ea-480c-9fa0-f3ad8d196a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.057687 kubelet[3197]: E0421 10:26:34.057563 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a282b52-42ea-480c-9fa0-f3ad8d196a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25" podUID="7a282b52-42ea-480c-9fa0-f3ad8d196a94" Apr 21 10:26:34.070572 containerd[1990]: time="2026-04-21T10:26:34.069981571Z" level=error msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" failed" error="failed to destroy network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.071075 kubelet[3197]: E0421 10:26:34.071016 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:34.071299 kubelet[3197]: E0421 10:26:34.071261 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"} Apr 21 10:26:34.071883 kubelet[3197]: E0421 10:26:34.071383 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.071883 kubelet[3197]: E0421 10:26:34.071439 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-lw96w" podUID="b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb" Apr 21 10:26:34.080338 containerd[1990]: time="2026-04-21T10:26:34.080112714Z" level=error msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" failed" error="failed to destroy network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.081081 kubelet[3197]: E0421 10:26:34.080667 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:34.081081 kubelet[3197]: E0421 10:26:34.080723 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a"} Apr 21 10:26:34.081081 kubelet[3197]: E0421 10:26:34.081031 3197 
kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee27081f-d3fb-48c1-8c12-8d86f1601923\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.081829 kubelet[3197]: E0421 10:26:34.081383 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee27081f-d3fb-48c1-8c12-8d86f1601923\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-sxk82" podUID="ee27081f-d3fb-48c1-8c12-8d86f1601923" Apr 21 10:26:34.083991 containerd[1990]: time="2026-04-21T10:26:34.083947746Z" level=error msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" failed" error="failed to destroy network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:26:34.084402 kubelet[3197]: E0421 10:26:34.084352 3197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:34.084493 kubelet[3197]: E0421 10:26:34.084413 3197 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be"} Apr 21 10:26:34.084493 kubelet[3197]: E0421 10:26:34.084450 3197 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1970c874-1e64-4d7c-990c-50c703ddceae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:26:34.084605 kubelet[3197]: E0421 10:26:34.084493 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1970c874-1e64-4d7c-990c-50c703ddceae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f86cd7d4b-c5tr7" podUID="1970c874-1e64-4d7c-990c-50c703ddceae" Apr 21 10:26:34.888983 containerd[1990]: time="2026-04-21T10:26:34.888592583Z" level=info msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\"" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.049 [INFO][4687] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 
10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.050 [INFO][4687] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" iface="eth0" netns="/var/run/netns/cni-8f336491-6a0f-18cd-9d75-c2a3cf292c83" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.050 [INFO][4687] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" iface="eth0" netns="/var/run/netns/cni-8f336491-6a0f-18cd-9d75-c2a3cf292c83" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.051 [INFO][4687] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" iface="eth0" netns="/var/run/netns/cni-8f336491-6a0f-18cd-9d75-c2a3cf292c83" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.051 [INFO][4687] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.051 [INFO][4687] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.166 [INFO][4710] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.167 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.167 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.177 [WARNING][4710] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.177 [INFO][4710] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.179 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:35.184995 containerd[1990]: 2026-04-21 10:26:35.182 [INFO][4687] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:35.186656 containerd[1990]: time="2026-04-21T10:26:35.186514565Z" level=info msg="TearDown network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" successfully" Apr 21 10:26:35.186656 containerd[1990]: time="2026-04-21T10:26:35.186555109Z" level=info msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" returns successfully" Apr 21 10:26:35.189825 systemd[1]: run-netns-cni\x2d8f336491\x2d6a0f\x2d18cd\x2d9d75\x2dc2a3cf292c83.mount: Deactivated successfully. 
Apr 21 10:26:35.254990 kubelet[3197]: I0421 10:26:35.254915 3197 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-ca-bundle\") pod \"1970c874-1e64-4d7c-990c-50c703ddceae\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " Apr 21 10:26:35.254990 kubelet[3197]: I0421 10:26:35.254990 3197 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-backend-key-pair\") pod \"1970c874-1e64-4d7c-990c-50c703ddceae\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " Apr 21 10:26:35.255510 kubelet[3197]: I0421 10:26:35.255044 3197 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-nginx-config\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-nginx-config\") pod \"1970c874-1e64-4d7c-990c-50c703ddceae\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " Apr 21 10:26:35.255510 kubelet[3197]: I0421 10:26:35.255088 3197 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1970c874-1e64-4d7c-990c-50c703ddceae-kube-api-access-g5c26\" (UniqueName: \"kubernetes.io/projected/1970c874-1e64-4d7c-990c-50c703ddceae-kube-api-access-g5c26\") pod \"1970c874-1e64-4d7c-990c-50c703ddceae\" (UID: \"1970c874-1e64-4d7c-990c-50c703ddceae\") " Apr 21 10:26:35.263440 systemd[1]: var-lib-kubelet-pods-1970c874\x2d1e64\x2d4d7c\x2d990c\x2d50c703ddceae-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 10:26:35.267925 kubelet[3197]: I0421 10:26:35.265389 3197 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-backend-key-pair" pod "1970c874-1e64-4d7c-990c-50c703ddceae" (UID: "1970c874-1e64-4d7c-990c-50c703ddceae"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:26:35.268584 kubelet[3197]: I0421 10:26:35.268269 3197 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1970c874-1e64-4d7c-990c-50c703ddceae-kube-api-access-g5c26" pod "1970c874-1e64-4d7c-990c-50c703ddceae" (UID: "1970c874-1e64-4d7c-990c-50c703ddceae"). InnerVolumeSpecName "kube-api-access-g5c26". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:26:35.270230 systemd[1]: var-lib-kubelet-pods-1970c874\x2d1e64\x2d4d7c\x2d990c\x2d50c703ddceae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5c26.mount: Deactivated successfully. Apr 21 10:26:35.273322 kubelet[3197]: I0421 10:26:35.273251 3197 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-nginx-config" pod "1970c874-1e64-4d7c-990c-50c703ddceae" (UID: "1970c874-1e64-4d7c-990c-50c703ddceae"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:26:35.273322 kubelet[3197]: I0421 10:26:35.273293 3197 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-ca-bundle" pod "1970c874-1e64-4d7c-990c-50c703ddceae" (UID: "1970c874-1e64-4d7c-990c-50c703ddceae"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:26:35.355700 kubelet[3197]: I0421 10:26:35.355643 3197 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-ca-bundle\") on node \"ip-172-31-16-209\" DevicePath \"\"" Apr 21 10:26:35.355700 kubelet[3197]: I0421 10:26:35.355679 3197 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1970c874-1e64-4d7c-990c-50c703ddceae-whisker-backend-key-pair\") on node \"ip-172-31-16-209\" DevicePath \"\"" Apr 21 10:26:35.355700 kubelet[3197]: I0421 10:26:35.355698 3197 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1970c874-1e64-4d7c-990c-50c703ddceae-nginx-config\") on node \"ip-172-31-16-209\" DevicePath \"\"" Apr 21 10:26:35.355700 kubelet[3197]: I0421 10:26:35.355711 3197 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g5c26\" (UniqueName: \"kubernetes.io/projected/1970c874-1e64-4d7c-990c-50c703ddceae-kube-api-access-g5c26\") on node \"ip-172-31-16-209\" DevicePath \"\"" Apr 21 10:26:35.987367 systemd[1]: Removed slice kubepods-besteffort-pod1970c874_1e64_4d7c_990c_50c703ddceae.slice - libcontainer container kubepods-besteffort-pod1970c874_1e64_4d7c_990c_50c703ddceae.slice. Apr 21 10:26:36.164513 systemd[1]: Created slice kubepods-besteffort-podf6e1c26b_460c_4baf_9e9f_2602a5dcfc61.slice - libcontainer container kubepods-besteffort-podf6e1c26b_460c_4baf_9e9f_2602a5dcfc61.slice. 
Apr 21 10:26:36.294029 kubelet[3197]: I0421 10:26:36.293913 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1c26b-460c-4baf-9e9f-2602a5dcfc61-whisker-ca-bundle\") pod \"whisker-8447dc59-fp7fz\" (UID: \"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61\") " pod="calico-system/whisker-8447dc59-fp7fz"
Apr 21 10:26:36.294029 kubelet[3197]: I0421 10:26:36.293975 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6e1c26b-460c-4baf-9e9f-2602a5dcfc61-whisker-backend-key-pair\") pod \"whisker-8447dc59-fp7fz\" (UID: \"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61\") " pod="calico-system/whisker-8447dc59-fp7fz"
Apr 21 10:26:36.294029 kubelet[3197]: I0421 10:26:36.294016 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f6e1c26b-460c-4baf-9e9f-2602a5dcfc61-nginx-config\") pod \"whisker-8447dc59-fp7fz\" (UID: \"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61\") " pod="calico-system/whisker-8447dc59-fp7fz"
Apr 21 10:26:36.295945 kubelet[3197]: I0421 10:26:36.294058 3197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvmc\" (UniqueName: \"kubernetes.io/projected/f6e1c26b-460c-4baf-9e9f-2602a5dcfc61-kube-api-access-gwvmc\") pod \"whisker-8447dc59-fp7fz\" (UID: \"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61\") " pod="calico-system/whisker-8447dc59-fp7fz"
Apr 21 10:26:36.488020 kubelet[3197]: I0421 10:26:36.487982 3197 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1970c874-1e64-4d7c-990c-50c703ddceae" path="/var/lib/kubelet/pods/1970c874-1e64-4d7c-990c-50c703ddceae/volumes"
Apr 21 10:26:36.489802 kernel: calico-node[4807]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 21
10:26:36.531148 containerd[1990]: time="2026-04-21T10:26:36.531088476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8447dc59-fp7fz,Uid:f6e1c26b-460c-4baf-9e9f-2602a5dcfc61,Namespace:calico-system,Attempt:0,}"
Apr 21 10:26:36.985718 systemd-networkd[1896]: calie2bab253eb3: Link UP
Apr 21 10:26:36.988233 systemd-networkd[1896]: calie2bab253eb3: Gained carrier
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.704 [INFO][4848] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0 whisker-8447dc59- calico-system f6e1c26b-460c-4baf-9e9f-2602a5dcfc61 969 0 2026-04-21 10:26:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8447dc59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-209 whisker-8447dc59-fp7fz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2bab253eb3 [] [] }} ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.704 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.797 [INFO][4860] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" HandleID="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8"
Workload="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.818 [INFO][4860] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" HandleID="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Workload="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ffb40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"whisker-8447dc59-fp7fz", "timestamp":"2026-04-21 10:26:36.797704076 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005926e0)}
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.818 [INFO][4860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.819 [INFO][4860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.819 [INFO][4860] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209'
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.825 [INFO][4860] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.835 [INFO][4860] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.842 [INFO][4860] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.846 [INFO][4860] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.850 [INFO][4860] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.850 [INFO][4860] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.853 [INFO][4860] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.876 [INFO][4860] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.909 [INFO][4860] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.1/26] block=192.168.77.0/26
handle="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.909 [INFO][4860] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.1/26] handle="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" host="ip-172-31-16-209"
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.909 [INFO][4860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:26:37.014751 containerd[1990]: 2026-04-21 10:26:36.909 [INFO][4860] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.1/26] IPv6=[] ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" HandleID="k8s-pod-network.7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Workload="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:36.914 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0", GenerateName:"whisker-8447dc59-", Namespace:"calico-system", SelfLink:"", UID:"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8447dc59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"},
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"whisker-8447dc59-fp7fz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2bab253eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:36.914 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.1/32] ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:36.914 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2bab253eb3 ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:36.972 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:36.972 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system"
Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0", GenerateName:"whisker-8447dc59-", Namespace:"calico-system", SelfLink:"", UID:"f6e1c26b-460c-4baf-9e9f-2602a5dcfc61", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8447dc59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8", Pod:"whisker-8447dc59-fp7fz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2bab253eb3", MAC:"6a:85:29:28:38:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:26:37.021634 containerd[1990]: 2026-04-21 10:26:37.009 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8" Namespace="calico-system" Pod="whisker-8447dc59-fp7fz" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--8447dc59--fp7fz-eth0"
Apr 21 10:26:37.039165 (udev-worker)[4872]: Network interface NamePolicy=
disabled on kernel command line.
Apr 21 10:26:37.498245 containerd[1990]: time="2026-04-21T10:26:37.491078027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:26:37.498245 containerd[1990]: time="2026-04-21T10:26:37.498031255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:26:37.498245 containerd[1990]: time="2026-04-21T10:26:37.498054656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:26:37.498245 containerd[1990]: time="2026-04-21T10:26:37.498178889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:26:37.597724 systemd[1]: Started cri-containerd-7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8.scope - libcontainer container 7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8.
Apr 21 10:26:37.702600 (udev-worker)[4871]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:26:37.719411 systemd-networkd[1896]: vxlan.calico: Link UP
Apr 21 10:26:37.719428 systemd-networkd[1896]: vxlan.calico: Gained carrier
Apr 21 10:26:37.854904 containerd[1990]: time="2026-04-21T10:26:37.850396922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8447dc59-fp7fz,Uid:f6e1c26b-460c-4baf-9e9f-2602a5dcfc61,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8\""
Apr 21 10:26:37.878291 containerd[1990]: time="2026-04-21T10:26:37.878246479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 21 10:26:38.790691 systemd-networkd[1896]: calie2bab253eb3: Gained IPv6LL
Apr 21 10:26:39.596365 containerd[1990]: time="2026-04-21T10:26:39.596300467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:39.597581 containerd[1990]: time="2026-04-21T10:26:39.597514817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 21 10:26:39.613188 containerd[1990]: time="2026-04-21T10:26:39.613117354Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:39.616203 containerd[1990]: time="2026-04-21T10:26:39.616133336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:39.617116 containerd[1990]: time="2026-04-21T10:26:39.617075714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.738784865s"
Apr 21 10:26:39.617116 containerd[1990]: time="2026-04-21T10:26:39.617113866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 21 10:26:39.628209 containerd[1990]: time="2026-04-21T10:26:39.628003329Z" level=info msg="CreateContainer within sandbox \"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:26:39.659078 containerd[1990]: time="2026-04-21T10:26:39.658413715Z" level=info msg="CreateContainer within sandbox \"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e\""
Apr 21 10:26:39.660647 containerd[1990]: time="2026-04-21T10:26:39.660608690Z" level=info msg="StartContainer for \"ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e\""
Apr 21 10:26:39.704398 systemd[1]: run-containerd-runc-k8s.io-ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e-runc.thSHMo.mount: Deactivated successfully.
Apr 21 10:26:39.713330 systemd[1]: Started cri-containerd-ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e.scope - libcontainer container ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e.
Apr 21 10:26:39.750018 systemd-networkd[1896]: vxlan.calico: Gained IPv6LL
Apr 21 10:26:39.818178 containerd[1990]: time="2026-04-21T10:26:39.818127714Z" level=info msg="StartContainer for \"ccab59d840d9294b1674567aa793159d6a5854d12a8cc7c6c1e22fc70e072c1e\" returns successfully"
Apr 21 10:26:39.822125 containerd[1990]: time="2026-04-21T10:26:39.822062040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 21 10:26:41.894581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219224119.mount: Deactivated successfully.
Apr 21 10:26:41.938623 containerd[1990]: time="2026-04-21T10:26:41.937972809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:41.939315 containerd[1990]: time="2026-04-21T10:26:41.939261411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 21 10:26:41.940737 containerd[1990]: time="2026-04-21T10:26:41.940681850Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:41.943391 containerd[1990]: time="2026-04-21T10:26:41.943355065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:26:41.944830 containerd[1990]: time="2026-04-21T10:26:41.944314366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\"
in 2.122184879s"
Apr 21 10:26:41.944830 containerd[1990]: time="2026-04-21T10:26:41.944358776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 21 10:26:41.951041 containerd[1990]: time="2026-04-21T10:26:41.950981446Z" level=info msg="CreateContainer within sandbox \"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 21 10:26:41.970189 containerd[1990]: time="2026-04-21T10:26:41.970138511Z" level=info msg="CreateContainer within sandbox \"7f9668b26511eb583ebcabf34d65712bd8e8c27d15a8a6476ef858ced19f5ff8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4c0668fd7e8eef8612c28fc51baa6260512282b69e5ba86119ca2952a3de047e\""
Apr 21 10:26:41.971709 containerd[1990]: time="2026-04-21T10:26:41.971638419Z" level=info msg="StartContainer for \"4c0668fd7e8eef8612c28fc51baa6260512282b69e5ba86119ca2952a3de047e\""
Apr 21 10:26:41.976413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260717286.mount: Deactivated successfully.
Apr 21 10:26:42.014031 systemd[1]: Started cri-containerd-4c0668fd7e8eef8612c28fc51baa6260512282b69e5ba86119ca2952a3de047e.scope - libcontainer container 4c0668fd7e8eef8612c28fc51baa6260512282b69e5ba86119ca2952a3de047e.
Apr 21 10:26:42.065749 containerd[1990]: time="2026-04-21T10:26:42.065419217Z" level=info msg="StartContainer for \"4c0668fd7e8eef8612c28fc51baa6260512282b69e5ba86119ca2952a3de047e\" returns successfully"
Apr 21 10:26:42.384683 ntpd[1958]: Listen normally on 8 vxlan.calico 192.168.77.0:123
Apr 21 10:26:42.384821 ntpd[1958]: Listen normally on 9 calie2bab253eb3 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:26:42.386299 ntpd[1958]: 21 Apr 10:26:42 ntpd[1958]: Listen normally on 8 vxlan.calico 192.168.77.0:123
Apr 21 10:26:42.386299 ntpd[1958]: 21 Apr 10:26:42 ntpd[1958]: Listen normally on 9 calie2bab253eb3 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:26:42.386299 ntpd[1958]: 21 Apr 10:26:42 ntpd[1958]: Listen normally on 10 vxlan.calico [fe80::64f9:85ff:fe18:805c%5]:123
Apr 21 10:26:42.384884 ntpd[1958]: Listen normally on 10 vxlan.calico [fe80::64f9:85ff:fe18:805c%5]:123
Apr 21 10:26:43.903292 systemd[1]: Started sshd@7-172.31.16.209:22-50.85.169.122:46418.service - OpenSSH per-connection server daemon (50.85.169.122:46418).
Apr 21 10:26:44.440092 containerd[1990]: time="2026-04-21T10:26:44.439838304Z" level=info msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\""
Apr 21 10:26:44.508089 containerd[1990]: time="2026-04-21T10:26:44.506719075Z" level=info msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\""
Apr 21 10:26:44.508089 containerd[1990]: time="2026-04-21T10:26:44.507645234Z" level=info msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\""
Apr 21 10:26:44.618638 kubelet[3197]: I0421 10:26:44.618062 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-8447dc59-fp7fz" podStartSLOduration=4.539500945 podStartE2EDuration="8.61802966s" podCreationTimestamp="2026-04-21 10:26:36 +0000 UTC" firstStartedPulling="2026-04-21 10:26:37.867062359 +0000 UTC m=+53.627232909" lastFinishedPulling="2026-04-21 10:26:41.945591087 +0000 UTC m=+57.705761624" observedRunningTime="2026-04-21 10:26:42.957793488 +0000 UTC m=+58.717964096" watchObservedRunningTime="2026-04-21 10:26:44.61802966 +0000 UTC m=+60.378200219"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.634 [WARNING][5144] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.634 [INFO][5144] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.634 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" iface="eth0" netns=""
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.634 [INFO][5144] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.634 [INFO][5144] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.714 [INFO][5169] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.714 [INFO][5169] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.714 [INFO][5169] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.724 [WARNING][5169] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.724 [INFO][5169] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0"
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.726 [INFO][5169] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:26:44.737159 containerd[1990]: 2026-04-21 10:26:44.729 [INFO][5144] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be"
Apr 21 10:26:44.759284 containerd[1990]: time="2026-04-21T10:26:44.758998924Z" level=info msg="TearDown network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" successfully"
Apr 21 10:26:44.759284 containerd[1990]: time="2026-04-21T10:26:44.759045927Z" level=info msg="StopPodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" returns successfully"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" iface="eth0" netns="/var/run/netns/cni-4f2d34d7-26d1-487b-9560-b98474b6f43d"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" iface="eth0" netns="/var/run/netns/cni-4f2d34d7-26d1-487b-9560-b98474b6f43d"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" iface="eth0" netns="/var/run/netns/cni-4f2d34d7-26d1-487b-9560-b98474b6f43d"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.673 [INFO][5146] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.720 [INFO][5174] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.720 [INFO][5174] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.726 [INFO][5174] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.741 [WARNING][5174] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.741 [INFO][5174] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0"
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.745 [INFO][5174] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:26:44.762539 containerd[1990]: 2026-04-21 10:26:44.751 [INFO][5146] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df"
Apr 21 10:26:44.767484 containerd[1990]: time="2026-04-21T10:26:44.762725463Z" level=info msg="TearDown network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" successfully"
Apr 21 10:26:44.767484 containerd[1990]: time="2026-04-21T10:26:44.762781449Z" level=info msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" returns successfully"
Apr 21 10:26:44.770677 systemd[1]: run-netns-cni\x2d4f2d34d7\x2d26d1\x2d487b\x2d9560\x2db98474b6f43d.mount: Deactivated successfully.
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.620 [INFO][5145] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.621 [INFO][5145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" iface="eth0" netns="/var/run/netns/cni-c21f97af-9186-7066-5533-d3c1d2d3ef4c"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.622 [INFO][5145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" iface="eth0" netns="/var/run/netns/cni-c21f97af-9186-7066-5533-d3c1d2d3ef4c"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.622 [INFO][5145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" iface="eth0" netns="/var/run/netns/cni-c21f97af-9186-7066-5533-d3c1d2d3ef4c"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.622 [INFO][5145] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.622 [INFO][5145] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.723 [INFO][5162] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0"
Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.723 [INFO][5162]
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.745 [INFO][5162] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.758 [WARNING][5162] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.758 [INFO][5162] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.761 [INFO][5162] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:44.774834 containerd[1990]: 2026-04-21 10:26:44.766 [INFO][5145] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:26:44.779464 containerd[1990]: time="2026-04-21T10:26:44.775447730Z" level=info msg="TearDown network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" successfully" Apr 21 10:26:44.779464 containerd[1990]: time="2026-04-21T10:26:44.777810797Z" level=info msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" returns successfully" Apr 21 10:26:44.779881 systemd[1]: run-netns-cni\x2dc21f97af\x2d9186\x2d7066\x2d5533\x2dd3c1d2d3ef4c.mount: Deactivated successfully. 
Apr 21 10:26:44.780525 containerd[1990]: time="2026-04-21T10:26:44.779911493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-8gqsh,Uid:40d3332a-bed0-42c4-9601-3b65769379ef,Namespace:calico-system,Attempt:1,}" Apr 21 10:26:44.785618 containerd[1990]: time="2026-04-21T10:26:44.785579338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4g4wj,Uid:5025f349-a8c7-438c-8426-5f946767fac4,Namespace:kube-system,Attempt:1,}" Apr 21 10:26:44.795915 containerd[1990]: time="2026-04-21T10:26:44.795711510Z" level=info msg="RemovePodSandbox for \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\"" Apr 21 10:26:44.795915 containerd[1990]: time="2026-04-21T10:26:44.795854013Z" level=info msg="Forcibly stopping sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\"" Apr 21 10:26:44.971457 sshd[5114]: Accepted publickey for core from 50.85.169.122 port 46418 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:26:44.977390 sshd[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:26:44.988687 systemd-logind[1964]: New session 8 of user core. Apr 21 10:26:44.996999 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:44.950 [WARNING][5202] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" WorkloadEndpoint="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:44.951 [INFO][5202] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:44.951 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" iface="eth0" netns="" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:44.951 [INFO][5202] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:44.951 [INFO][5202] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.028 [INFO][5227] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.028 [INFO][5227] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.028 [INFO][5227] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.050 [WARNING][5227] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.050 [INFO][5227] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" HandleID="k8s-pod-network.ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Workload="ip--172--31--16--209-k8s-whisker--7f86cd7d4b--c5tr7-eth0" Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.057 [INFO][5227] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:45.062988 containerd[1990]: 2026-04-21 10:26:45.059 [INFO][5202] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be" Apr 21 10:26:45.065055 containerd[1990]: time="2026-04-21T10:26:45.064528972Z" level=info msg="TearDown network for sandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" successfully" Apr 21 10:26:45.072352 containerd[1990]: time="2026-04-21T10:26:45.071919004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:26:45.072352 containerd[1990]: time="2026-04-21T10:26:45.072011906Z" level=info msg="RemovePodSandbox \"ce836bbb8afdc0ea7ec7e329e4aaf70393fd2adc5ad08e38843df2649e7961be\" returns successfully" Apr 21 10:26:45.146626 systemd-networkd[1896]: calie87afa0ff3a: Link UP Apr 21 10:26:45.146888 systemd-networkd[1896]: calie87afa0ff3a: Gained carrier Apr 21 10:26:45.153658 (udev-worker)[5244]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:44.913 [INFO][5188] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0 calico-apiserver-649f9d4c46- calico-system 40d3332a-bed0-42c4-9601-3b65769379ef 1039 0 2026-04-21 10:26:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649f9d4c46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-209 calico-apiserver-649f9d4c46-8gqsh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie87afa0ff3a [] [] }} ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:44.915 [INFO][5188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.035 [INFO][5222] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" HandleID="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.053 [INFO][5222] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" 
HandleID="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000402160), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"calico-apiserver-649f9d4c46-8gqsh", "timestamp":"2026-04-21 10:26:45.035402859 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c5600)} Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.054 [INFO][5222] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.056 [INFO][5222] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.056 [INFO][5222] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.070 [INFO][5222] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.083 [INFO][5222] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.101 [INFO][5222] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.108 [INFO][5222] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.117 [INFO][5222] ipam/ipam.go 237: Affinity is confirmed and block has been 
loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.117 [INFO][5222] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.120 [INFO][5222] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38 Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.127 [INFO][5222] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.138 [INFO][5222] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.2/26] block=192.168.77.0/26 handle="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.138 [INFO][5222] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.2/26] handle="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" host="ip-172-31-16-209" Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.138 [INFO][5222] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:26:45.179626 containerd[1990]: 2026-04-21 10:26:45.138 [INFO][5222] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.2/26] IPv6=[] ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" HandleID="k8s-pod-network.5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.143 [INFO][5188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"40d3332a-bed0-42c4-9601-3b65769379ef", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-apiserver-649f9d4c46-8gqsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie87afa0ff3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.143 [INFO][5188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.2/32] ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.143 [INFO][5188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87afa0ff3a ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.148 [INFO][5188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.149 [INFO][5188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"40d3332a-bed0-42c4-9601-3b65769379ef", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38", Pod:"calico-apiserver-649f9d4c46-8gqsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie87afa0ff3a", MAC:"26:cb:4b:8e:6d:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.180721 containerd[1990]: 2026-04-21 10:26:45.172 [INFO][5188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-8gqsh" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:26:45.240846 containerd[1990]: time="2026-04-21T10:26:45.239799818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:45.240846 containerd[1990]: time="2026-04-21T10:26:45.239883439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:45.241206 containerd[1990]: time="2026-04-21T10:26:45.241057120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:45.245694 containerd[1990]: time="2026-04-21T10:26:45.242953391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:45.257481 systemd-networkd[1896]: calibe80a9a40e7: Link UP Apr 21 10:26:45.263604 (udev-worker)[5246]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:26:45.264948 systemd-networkd[1896]: calibe80a9a40e7: Gained carrier Apr 21 10:26:45.290843 systemd[1]: Started cri-containerd-5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38.scope - libcontainer container 5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38. 
Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:44.949 [INFO][5206] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0 coredns-7d764666f9- kube-system 5025f349-a8c7-438c-8426-5f946767fac4 1038 0 2026-04-21 10:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-209 coredns-7d764666f9-4g4wj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe80a9a40e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:44.949 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.059 [INFO][5232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" HandleID="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.081 [INFO][5232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" 
HandleID="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c97b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-209", "pod":"coredns-7d764666f9-4g4wj", "timestamp":"2026-04-21 10:26:45.059590262 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000302420)} Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.081 [INFO][5232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.139 [INFO][5232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.139 [INFO][5232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.169 [INFO][5232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.196 [INFO][5232] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.204 [INFO][5232] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.209 [INFO][5232] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.214 [INFO][5232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.214 [INFO][5232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.222 [INFO][5232] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98 Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.230 [INFO][5232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.246 [INFO][5232] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.3/26] block=192.168.77.0/26 handle="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.246 [INFO][5232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.3/26] handle="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" host="ip-172-31-16-209" Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.247 [INFO][5232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:26:45.308115 containerd[1990]: 2026-04-21 10:26:45.247 [INFO][5232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.3/26] IPv6=[] ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" HandleID="k8s-pod-network.50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.250 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5025f349-a8c7-438c-8426-5f946767fac4", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"coredns-7d764666f9-4g4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe80a9a40e7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.250 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.3/32] ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.251 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe80a9a40e7 ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.258 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.259 [INFO][5206] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5025f349-a8c7-438c-8426-5f946767fac4", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98", Pod:"coredns-7d764666f9-4g4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe80a9a40e7", MAC:"3e:24:30:63:52:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.309280 containerd[1990]: 2026-04-21 10:26:45.298 [INFO][5206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98" Namespace="kube-system" Pod="coredns-7d764666f9-4g4wj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:26:45.349863 containerd[1990]: time="2026-04-21T10:26:45.347915044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:45.349863 containerd[1990]: time="2026-04-21T10:26:45.347987522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:45.349863 containerd[1990]: time="2026-04-21T10:26:45.348027836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:45.349863 containerd[1990]: time="2026-04-21T10:26:45.348143895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:45.383178 systemd[1]: Started cri-containerd-50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98.scope - libcontainer container 50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98. 
Apr 21 10:26:45.424289 containerd[1990]: time="2026-04-21T10:26:45.424219695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-8gqsh,Uid:40d3332a-bed0-42c4-9601-3b65769379ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38\"" Apr 21 10:26:45.427687 containerd[1990]: time="2026-04-21T10:26:45.427379435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:26:45.453435 containerd[1990]: time="2026-04-21T10:26:45.452583654Z" level=info msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" Apr 21 10:26:45.497422 containerd[1990]: time="2026-04-21T10:26:45.496506703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4g4wj,Uid:5025f349-a8c7-438c-8426-5f946767fac4,Namespace:kube-system,Attempt:1,} returns sandbox id \"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98\"" Apr 21 10:26:45.521464 containerd[1990]: time="2026-04-21T10:26:45.520540594Z" level=info msg="CreateContainer within sandbox \"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:26:45.587632 containerd[1990]: time="2026-04-21T10:26:45.586599342Z" level=info msg="CreateContainer within sandbox \"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18ed2de8d59d295dbb4930d04191e43262af0b2ef460075cd9fcd13640d8d94d\"" Apr 21 10:26:45.592473 containerd[1990]: time="2026-04-21T10:26:45.589611363Z" level=info msg="StartContainer for \"18ed2de8d59d295dbb4930d04191e43262af0b2ef460075cd9fcd13640d8d94d\"" Apr 21 10:26:45.652057 systemd[1]: Started cri-containerd-18ed2de8d59d295dbb4930d04191e43262af0b2ef460075cd9fcd13640d8d94d.scope - libcontainer container 18ed2de8d59d295dbb4930d04191e43262af0b2ef460075cd9fcd13640d8d94d. 
Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.582 [INFO][5362] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.583 [INFO][5362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" iface="eth0" netns="/var/run/netns/cni-956e0c46-2929-577a-f0e7-f188e6eaa4df" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.583 [INFO][5362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" iface="eth0" netns="/var/run/netns/cni-956e0c46-2929-577a-f0e7-f188e6eaa4df" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.585 [INFO][5362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" iface="eth0" netns="/var/run/netns/cni-956e0c46-2929-577a-f0e7-f188e6eaa4df" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.585 [INFO][5362] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.585 [INFO][5362] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.627 [INFO][5377] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.627 [INFO][5377] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.627 [INFO][5377] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.641 [WARNING][5377] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.641 [INFO][5377] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.646 [INFO][5377] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:45.653368 containerd[1990]: 2026-04-21 10:26:45.649 [INFO][5362] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:26:45.655712 containerd[1990]: time="2026-04-21T10:26:45.655045146Z" level=info msg="TearDown network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" successfully" Apr 21 10:26:45.655712 containerd[1990]: time="2026-04-21T10:26:45.655079475Z" level=info msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" returns successfully" Apr 21 10:26:45.664509 containerd[1990]: time="2026-04-21T10:26:45.663903255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwns8,Uid:9ef1622f-119b-4965-be74-eb6954ebbd5e,Namespace:calico-system,Attempt:1,}" Apr 21 10:26:45.721242 containerd[1990]: time="2026-04-21T10:26:45.721196189Z" level=info msg="StartContainer for \"18ed2de8d59d295dbb4930d04191e43262af0b2ef460075cd9fcd13640d8d94d\" returns successfully" Apr 21 10:26:45.781688 systemd[1]: run-netns-cni\x2d956e0c46\x2d2929\x2d577a\x2df0e7\x2df188e6eaa4df.mount: Deactivated successfully. 
Apr 21 10:26:45.912595 systemd-networkd[1896]: cali2be355f7316: Link UP Apr 21 10:26:45.914676 systemd-networkd[1896]: cali2be355f7316: Gained carrier Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.751 [INFO][5412] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0 csi-node-driver- calico-system 9ef1622f-119b-4965-be74-eb6954ebbd5e 1056 0 2026-04-21 10:26:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-209 csi-node-driver-fwns8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2be355f7316 [] [] }} ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.751 [INFO][5412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.818 [INFO][5432] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" HandleID="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.830 [INFO][5432] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" HandleID="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397410), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"csi-node-driver-fwns8", "timestamp":"2026-04-21 10:26:45.818031882 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115080)} Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.831 [INFO][5432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.831 [INFO][5432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.831 [INFO][5432] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.835 [INFO][5432] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.845 [INFO][5432] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.853 [INFO][5432] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.855 [INFO][5432] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.860 [INFO][5432] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.860 [INFO][5432] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.862 [INFO][5432] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2 Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.873 [INFO][5432] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.893 [INFO][5432] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.4/26] block=192.168.77.0/26 
handle="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.901 [INFO][5432] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.4/26] handle="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" host="ip-172-31-16-209" Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.901 [INFO][5432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:45.953248 containerd[1990]: 2026-04-21 10:26:45.901 [INFO][5432] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.4/26] IPv6=[] ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" HandleID="k8s-pod-network.efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.909 [INFO][5412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef1622f-119b-4965-be74-eb6954ebbd5e", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"csi-node-driver-fwns8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2be355f7316", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.909 [INFO][5412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.4/32] ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.909 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2be355f7316 ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.914 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.914 [INFO][5412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef1622f-119b-4965-be74-eb6954ebbd5e", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2", Pod:"csi-node-driver-fwns8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2be355f7316", MAC:"d6:56:5c:5f:92:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:45.956601 containerd[1990]: 2026-04-21 10:26:45.935 [INFO][5412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2" Namespace="calico-system" Pod="csi-node-driver-fwns8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:26:46.033017 containerd[1990]: time="2026-04-21T10:26:46.031851068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:46.033017 containerd[1990]: time="2026-04-21T10:26:46.031920641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:46.033017 containerd[1990]: time="2026-04-21T10:26:46.031939278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:46.033017 containerd[1990]: time="2026-04-21T10:26:46.032037972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:46.104594 systemd[1]: run-containerd-runc-k8s.io-efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2-runc.02mjMx.mount: Deactivated successfully. Apr 21 10:26:46.122573 systemd[1]: Started cri-containerd-efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2.scope - libcontainer container efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2. Apr 21 10:26:46.207448 containerd[1990]: time="2026-04-21T10:26:46.206961495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fwns8,Uid:9ef1622f-119b-4965-be74-eb6954ebbd5e,Namespace:calico-system,Attempt:1,} returns sandbox id \"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2\"" Apr 21 10:26:46.686535 sshd[5114]: pam_unix(sshd:session): session closed for user core Apr 21 10:26:46.691187 systemd[1]: sshd@7-172.31.16.209:22-50.85.169.122:46418.service: Deactivated successfully. 
Apr 21 10:26:46.694016 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:26:46.695815 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:26:46.698090 systemd-logind[1964]: Removed session 8. Apr 21 10:26:46.855247 systemd-networkd[1896]: calie87afa0ff3a: Gained IPv6LL Apr 21 10:26:47.008539 kubelet[3197]: I0421 10:26:47.008291 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-4g4wj" podStartSLOduration=57.008272498 podStartE2EDuration="57.008272498s" podCreationTimestamp="2026-04-21 10:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:26:46.004052845 +0000 UTC m=+61.764223415" watchObservedRunningTime="2026-04-21 10:26:47.008272498 +0000 UTC m=+62.768443057" Apr 21 10:26:47.174607 systemd-networkd[1896]: calibe80a9a40e7: Gained IPv6LL Apr 21 10:26:47.430566 systemd-networkd[1896]: cali2be355f7316: Gained IPv6LL Apr 21 10:26:47.464236 containerd[1990]: time="2026-04-21T10:26:47.464188961Z" level=info msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" Apr 21 10:26:47.466174 containerd[1990]: time="2026-04-21T10:26:47.465577000Z" level=info msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.624 [INFO][5542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.624 [INFO][5542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" iface="eth0" netns="/var/run/netns/cni-282b1d6d-5029-f5d4-b504-f07b562d2350" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.625 [INFO][5542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" iface="eth0" netns="/var/run/netns/cni-282b1d6d-5029-f5d4-b504-f07b562d2350" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.625 [INFO][5542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" iface="eth0" netns="/var/run/netns/cni-282b1d6d-5029-f5d4-b504-f07b562d2350" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.626 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.627 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.745 [INFO][5558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.746 [INFO][5558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.747 [INFO][5558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.767 [WARNING][5558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.767 [INFO][5558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.772 [INFO][5558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:47.790668 containerd[1990]: 2026-04-21 10:26:47.782 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:26:47.796566 containerd[1990]: time="2026-04-21T10:26:47.791056450Z" level=info msg="TearDown network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" successfully" Apr 21 10:26:47.796566 containerd[1990]: time="2026-04-21T10:26:47.791092508Z" level=info msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" returns successfully" Apr 21 10:26:47.796566 containerd[1990]: time="2026-04-21T10:26:47.795334708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sxk82,Uid:ee27081f-d3fb-48c1-8c12-8d86f1601923,Namespace:calico-system,Attempt:1,}" Apr 21 10:26:47.800201 systemd[1]: run-netns-cni\x2d282b1d6d\x2d5029\x2df5d4\x2db504\x2df07b562d2350.mount: Deactivated successfully. 
Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.634 [INFO][5538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.634 [INFO][5538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" iface="eth0" netns="/var/run/netns/cni-dc34332d-5747-a14a-3fa6-cd906467d5e8" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.634 [INFO][5538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" iface="eth0" netns="/var/run/netns/cni-dc34332d-5747-a14a-3fa6-cd906467d5e8" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.636 [INFO][5538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" iface="eth0" netns="/var/run/netns/cni-dc34332d-5747-a14a-3fa6-cd906467d5e8" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.636 [INFO][5538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.636 [INFO][5538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.754 [INFO][5560] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.754 
[INFO][5560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.772 [INFO][5560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.785 [WARNING][5560] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.785 [INFO][5560] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.789 [INFO][5560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:47.805058 containerd[1990]: 2026-04-21 10:26:47.797 [INFO][5538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:26:47.810386 systemd[1]: run-netns-cni\x2ddc34332d\x2d5747\x2da14a\x2d3fa6\x2dcd906467d5e8.mount: Deactivated successfully. 
Apr 21 10:26:47.810858 containerd[1990]: time="2026-04-21T10:26:47.810818537Z" level=info msg="TearDown network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" successfully" Apr 21 10:26:47.810963 containerd[1990]: time="2026-04-21T10:26:47.810858313Z" level=info msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" returns successfully" Apr 21 10:26:47.816800 containerd[1990]: time="2026-04-21T10:26:47.816729653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f8bbbb7b-2qs25,Uid:7a282b52-42ea-480c-9fa0-f3ad8d196a94,Namespace:calico-system,Attempt:1,}" Apr 21 10:26:48.129030 systemd-networkd[1896]: cali1171b3531ee: Link UP Apr 21 10:26:48.129287 systemd-networkd[1896]: cali1171b3531ee: Gained carrier Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:47.943 [INFO][5572] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0 goldmane-9f7667bb8- calico-system ee27081f-d3fb-48c1-8c12-8d86f1601923 1081 0 2026-04-21 10:26:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-209 goldmane-9f7667bb8-sxk82 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1171b3531ee [] [] }} ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:47.943 [INFO][5572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.004 [INFO][5596] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" HandleID="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.031 [INFO][5596] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" HandleID="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002777c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"goldmane-9f7667bb8-sxk82", "timestamp":"2026-04-21 10:26:48.004536388 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000305760)} Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.031 [INFO][5596] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.031 [INFO][5596] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.031 [INFO][5596] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.037 [INFO][5596] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.045 [INFO][5596] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.057 [INFO][5596] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.063 [INFO][5596] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.072 [INFO][5596] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.072 [INFO][5596] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.078 [INFO][5596] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8 Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.100 [INFO][5596] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.115 [INFO][5596] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.5/26] block=192.168.77.0/26 
handle="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.116 [INFO][5596] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.5/26] handle="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" host="ip-172-31-16-209" Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.116 [INFO][5596] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:48.180096 containerd[1990]: 2026-04-21 10:26:48.116 [INFO][5596] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.5/26] IPv6=[] ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" HandleID="k8s-pod-network.0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.124 [INFO][5572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee27081f-d3fb-48c1-8c12-8d86f1601923", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"goldmane-9f7667bb8-sxk82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1171b3531ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.124 [INFO][5572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.5/32] ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.124 [INFO][5572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1171b3531ee ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.128 [INFO][5572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.129 [INFO][5572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee27081f-d3fb-48c1-8c12-8d86f1601923", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8", Pod:"goldmane-9f7667bb8-sxk82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1171b3531ee", MAC:"7a:96:94:1e:0b:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:48.183220 containerd[1990]: 2026-04-21 10:26:48.160 [INFO][5572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sxk82" WorkloadEndpoint="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:26:48.280052 
containerd[1990]: time="2026-04-21T10:26:48.278239883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:48.280052 containerd[1990]: time="2026-04-21T10:26:48.278316591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:48.280052 containerd[1990]: time="2026-04-21T10:26:48.278348013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:48.280052 containerd[1990]: time="2026-04-21T10:26:48.278606475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:48.338186 systemd-networkd[1896]: calia2dd2ab7153: Link UP Apr 21 10:26:48.342746 systemd-networkd[1896]: calia2dd2ab7153: Gained carrier Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:47.974 [INFO][5582] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0 calico-kube-controllers-55f8bbbb7b- calico-system 7a282b52-42ea-480c-9fa0-f3ad8d196a94 1082 0 2026-04-21 10:26:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55f8bbbb7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-209 calico-kube-controllers-55f8bbbb7b-2qs25 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2dd2ab7153 [] [] }} ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" 
WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:47.974 [INFO][5582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.065 [INFO][5602] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" HandleID="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.107 [INFO][5602] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" HandleID="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"calico-kube-controllers-55f8bbbb7b-2qs25", "timestamp":"2026-04-21 10:26:48.065263214 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ec2c0)} Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.107 [INFO][5602] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.116 [INFO][5602] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.116 [INFO][5602] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.141 [INFO][5602] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.167 [INFO][5602] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.186 [INFO][5602] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.193 [INFO][5602] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.205 [INFO][5602] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.205 [INFO][5602] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.212 [INFO][5602] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.234 [INFO][5602] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 
2026-04-21 10:26:48.255 [INFO][5602] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.6/26] block=192.168.77.0/26 handle="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.256 [INFO][5602] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.6/26] handle="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" host="ip-172-31-16-209" Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.257 [INFO][5602] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:48.396863 containerd[1990]: 2026-04-21 10:26:48.258 [INFO][5602] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.6/26] IPv6=[] ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" HandleID="k8s-pod-network.941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.394291 systemd[1]: Started cri-containerd-0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8.scope - libcontainer container 0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8. 
Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.289 [INFO][5582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0", GenerateName:"calico-kube-controllers-55f8bbbb7b-", Namespace:"calico-system", SelfLink:"", UID:"7a282b52-42ea-480c-9fa0-f3ad8d196a94", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f8bbbb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-kube-controllers-55f8bbbb7b-2qs25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2dd2ab7153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.293 [INFO][5582] cni-plugin/k8s.go 419: 
Calico CNI using IPs: [192.168.77.6/32] ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.296 [INFO][5582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2dd2ab7153 ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.343 [INFO][5582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.349 [INFO][5582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0", GenerateName:"calico-kube-controllers-55f8bbbb7b-", Namespace:"calico-system", SelfLink:"", UID:"7a282b52-42ea-480c-9fa0-f3ad8d196a94", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f8bbbb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd", Pod:"calico-kube-controllers-55f8bbbb7b-2qs25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2dd2ab7153", MAC:"76:28:ba:89:04:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:48.400132 containerd[1990]: 2026-04-21 10:26:48.382 [INFO][5582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd" Namespace="calico-system" Pod="calico-kube-controllers-55f8bbbb7b-2qs25" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:26:48.464925 containerd[1990]: time="2026-04-21T10:26:48.464567011Z" level=info msg="StopPodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" Apr 21 10:26:48.627490 containerd[1990]: time="2026-04-21T10:26:48.626112600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:48.627490 containerd[1990]: time="2026-04-21T10:26:48.626216646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:48.627490 containerd[1990]: time="2026-04-21T10:26:48.626264450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:48.627490 containerd[1990]: time="2026-04-21T10:26:48.626468678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:48.628423 containerd[1990]: time="2026-04-21T10:26:48.628365308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sxk82,Uid:ee27081f-d3fb-48c1-8c12-8d86f1601923,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8\"" Apr 21 10:26:48.674471 systemd[1]: Started cri-containerd-941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd.scope - libcontainer container 941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd. Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.721 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.722 [INFO][5688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" iface="eth0" netns="/var/run/netns/cni-e2de1d25-a066-da76-9a5d-6060b583b1ad" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.723 [INFO][5688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" iface="eth0" netns="/var/run/netns/cni-e2de1d25-a066-da76-9a5d-6060b583b1ad" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.723 [INFO][5688] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" iface="eth0" netns="/var/run/netns/cni-e2de1d25-a066-da76-9a5d-6060b583b1ad" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.723 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.723 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.824 [INFO][5745] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.824 [INFO][5745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.824 [INFO][5745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.845 [WARNING][5745] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.845 [INFO][5745] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.850 [INFO][5745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:48.869242 containerd[1990]: 2026-04-21 10:26:48.857 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:26:48.871031 containerd[1990]: time="2026-04-21T10:26:48.870059002Z" level=info msg="TearDown network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" successfully" Apr 21 10:26:48.871031 containerd[1990]: time="2026-04-21T10:26:48.870139241Z" level=info msg="StopPodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" returns successfully" Apr 21 10:26:48.875855 containerd[1990]: time="2026-04-21T10:26:48.875310782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f8bbbb7b-2qs25,Uid:7a282b52-42ea-480c-9fa0-f3ad8d196a94,Namespace:calico-system,Attempt:1,} returns sandbox id \"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd\"" Apr 21 10:26:48.877245 systemd[1]: run-netns-cni\x2de2de1d25\x2da066\x2dda76\x2d9a5d\x2d6060b583b1ad.mount: Deactivated successfully. 
Apr 21 10:26:48.882488 containerd[1990]: time="2026-04-21T10:26:48.882332734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-7d8jv,Uid:ce8a55a1-902f-4d92-8943-f6c346590495,Namespace:calico-system,Attempt:1,}" Apr 21 10:26:49.147162 systemd-networkd[1896]: cali78504a00d11: Link UP Apr 21 10:26:49.148218 systemd-networkd[1896]: cali78504a00d11: Gained carrier Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.007 [INFO][5759] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0 calico-apiserver-649f9d4c46- calico-system ce8a55a1-902f-4d92-8943-f6c346590495 1097 0 2026-04-21 10:26:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649f9d4c46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-209 calico-apiserver-649f9d4c46-7d8jv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali78504a00d11 [] [] }} ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.007 [INFO][5759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.064 [INFO][5771] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" HandleID="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.079 [INFO][5771] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" HandleID="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"calico-apiserver-649f9d4c46-7d8jv", "timestamp":"2026-04-21 10:26:49.06404384 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.080 [INFO][5771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.080 [INFO][5771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.080 [INFO][5771] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.083 [INFO][5771] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.090 [INFO][5771] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.102 [INFO][5771] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.106 [INFO][5771] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.111 [INFO][5771] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.111 [INFO][5771] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.114 [INFO][5771] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6 Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.121 [INFO][5771] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.135 [INFO][5771] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.7/26] block=192.168.77.0/26 
handle="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.135 [INFO][5771] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.7/26] handle="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" host="ip-172-31-16-209" Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.135 [INFO][5771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:49.180822 containerd[1990]: 2026-04-21 10:26:49.135 [INFO][5771] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.7/26] IPv6=[] ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" HandleID="k8s-pod-network.b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.139 [INFO][5759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"ce8a55a1-902f-4d92-8943-f6c346590495", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-apiserver-649f9d4c46-7d8jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali78504a00d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.140 [INFO][5759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.7/32] ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.140 [INFO][5759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78504a00d11 ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.148 [INFO][5759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.150 [INFO][5759] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"ce8a55a1-902f-4d92-8943-f6c346590495", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6", Pod:"calico-apiserver-649f9d4c46-7d8jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali78504a00d11", MAC:"5e:74:60:9d:34:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:49.181610 containerd[1990]: 2026-04-21 10:26:49.174 [INFO][5759] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6" Namespace="calico-system" Pod="calico-apiserver-649f9d4c46-7d8jv" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:26:49.221984 systemd-networkd[1896]: cali1171b3531ee: Gained IPv6LL Apr 21 10:26:49.254674 containerd[1990]: time="2026-04-21T10:26:49.252913492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:49.254674 containerd[1990]: time="2026-04-21T10:26:49.253002038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:49.254674 containerd[1990]: time="2026-04-21T10:26:49.253040594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:49.254674 containerd[1990]: time="2026-04-21T10:26:49.253167556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:49.303819 systemd[1]: Started cri-containerd-b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6.scope - libcontainer container b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6. 
Apr 21 10:26:49.435385 containerd[1990]: time="2026-04-21T10:26:49.435221588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649f9d4c46-7d8jv,Uid:ce8a55a1-902f-4d92-8943-f6c346590495,Namespace:calico-system,Attempt:1,} returns sandbox id \"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6\"" Apr 21 10:26:49.452454 containerd[1990]: time="2026-04-21T10:26:49.452168796Z" level=info msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\"" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.555 [INFO][5845] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.555 [INFO][5845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" iface="eth0" netns="/var/run/netns/cni-05c889e8-9016-bd17-fa47-6e99edf18237" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.559 [INFO][5845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" iface="eth0" netns="/var/run/netns/cni-05c889e8-9016-bd17-fa47-6e99edf18237" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.561 [INFO][5845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" iface="eth0" netns="/var/run/netns/cni-05c889e8-9016-bd17-fa47-6e99edf18237" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.561 [INFO][5845] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.561 [INFO][5845] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.618 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.618 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.619 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.628 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.628 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.631 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:49.638666 containerd[1990]: 2026-04-21 10:26:49.633 [INFO][5845] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Apr 21 10:26:49.640115 containerd[1990]: time="2026-04-21T10:26:49.639047802Z" level=info msg="TearDown network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" successfully" Apr 21 10:26:49.640115 containerd[1990]: time="2026-04-21T10:26:49.639082097Z" level=info msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" returns successfully" Apr 21 10:26:49.643258 containerd[1990]: time="2026-04-21T10:26:49.643219529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lw96w,Uid:b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb,Namespace:kube-system,Attempt:1,}" Apr 21 10:26:49.800527 systemd[1]: run-netns-cni\x2d05c889e8\x2d9016\x2dbd17\x2dfa47\x2d6e99edf18237.mount: Deactivated successfully. 
Apr 21 10:26:49.884901 systemd-networkd[1896]: cali2d98ec5fd1f: Link UP Apr 21 10:26:49.885211 systemd-networkd[1896]: cali2d98ec5fd1f: Gained carrier Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.737 [INFO][5860] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0 coredns-7d764666f9- kube-system b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb 1110 0 2026-04-21 10:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-209 coredns-7d764666f9-lw96w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d98ec5fd1f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.737 [INFO][5860] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.796 [INFO][5873] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" HandleID="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.817 [INFO][5873] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" HandleID="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef440), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-209", "pod":"coredns-7d764666f9-lw96w", "timestamp":"2026-04-21 10:26:49.796058852 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f11e0)} Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.817 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.817 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.818 [INFO][5873] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.822 [INFO][5873] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.831 [INFO][5873] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.840 [INFO][5873] ipam/ipam.go 526: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.847 [INFO][5873] ipam/ipam.go 160: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.853 [INFO][5873] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.853 [INFO][5873] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.856 [INFO][5873] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65 Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.862 [INFO][5873] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.874 [INFO][5873] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.77.8/26] block=192.168.77.0/26 
handle="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.874 [INFO][5873] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.77.8/26] handle="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" host="ip-172-31-16-209" Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.875 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:26:49.918701 containerd[1990]: 2026-04-21 10:26:49.875 [INFO][5873] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.77.8/26] IPv6=[] ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" HandleID="k8s-pod-network.c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.880 [INFO][5860] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"coredns-7d764666f9-lw96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d98ec5fd1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.880 [INFO][5860] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.8/32] ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.880 [INFO][5860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d98ec5fd1f ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" 
WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.884 [INFO][5860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.884 [INFO][5860] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65", Pod:"coredns-7d764666f9-lw96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d98ec5fd1f", MAC:"22:eb:4f:90:a8:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:26:49.919897 containerd[1990]: 2026-04-21 10:26:49.913 [INFO][5860] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65" Namespace="kube-system" Pod="coredns-7d764666f9-lw96w" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0" Apr 21 10:26:49.927373 systemd-networkd[1896]: calia2dd2ab7153: Gained IPv6LL Apr 21 10:26:49.998239 containerd[1990]: time="2026-04-21T10:26:49.997601081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:26:49.998239 containerd[1990]: time="2026-04-21T10:26:49.997692118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:26:50.003550 containerd[1990]: time="2026-04-21T10:26:50.003185705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:50.003550 containerd[1990]: time="2026-04-21T10:26:50.003377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:26:50.070294 systemd[1]: Started cri-containerd-c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65.scope - libcontainer container c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65. Apr 21 10:26:50.163573 containerd[1990]: time="2026-04-21T10:26:50.163438296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lw96w,Uid:b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb,Namespace:kube-system,Attempt:1,} returns sandbox id \"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65\"" Apr 21 10:26:50.175347 containerd[1990]: time="2026-04-21T10:26:50.174752100Z" level=info msg="CreateContainer within sandbox \"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:26:50.230358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43647232.mount: Deactivated successfully. Apr 21 10:26:50.243230 containerd[1990]: time="2026-04-21T10:26:50.243163392Z" level=info msg="CreateContainer within sandbox \"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"893b1d7f5d30441f54334807ea096abfb13270d3180dbe030e47d5d3d377ff66\"" Apr 21 10:26:50.246041 containerd[1990]: time="2026-04-21T10:26:50.246004048Z" level=info msg="StartContainer for \"893b1d7f5d30441f54334807ea096abfb13270d3180dbe030e47d5d3d377ff66\"" Apr 21 10:26:50.307982 systemd[1]: Started cri-containerd-893b1d7f5d30441f54334807ea096abfb13270d3180dbe030e47d5d3d377ff66.scope - libcontainer container 893b1d7f5d30441f54334807ea096abfb13270d3180dbe030e47d5d3d377ff66. 
Apr 21 10:26:50.365377 containerd[1990]: time="2026-04-21T10:26:50.365261885Z" level=info msg="StartContainer for \"893b1d7f5d30441f54334807ea096abfb13270d3180dbe030e47d5d3d377ff66\" returns successfully" Apr 21 10:26:50.438559 containerd[1990]: time="2026-04-21T10:26:50.438503882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:50.443090 containerd[1990]: time="2026-04-21T10:26:50.443003177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:26:50.469097 containerd[1990]: time="2026-04-21T10:26:50.468990054Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:50.495835 containerd[1990]: time="2026-04-21T10:26:50.495540502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:50.496677 containerd[1990]: time="2026-04-21T10:26:50.496627332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.069206513s" Apr 21 10:26:50.496677 containerd[1990]: time="2026-04-21T10:26:50.496668170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:26:50.498147 containerd[1990]: time="2026-04-21T10:26:50.498003273Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:26:50.509461 containerd[1990]: time="2026-04-21T10:26:50.509418663Z" level=info msg="CreateContainer within sandbox \"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:26:50.537792 containerd[1990]: time="2026-04-21T10:26:50.537711892Z" level=info msg="CreateContainer within sandbox \"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2590b52dcfd54813f176a00210b2c7fd402b389a06117df2d02a35f30f0de5e2\"" Apr 21 10:26:50.553068 containerd[1990]: time="2026-04-21T10:26:50.553005152Z" level=info msg="StartContainer for \"2590b52dcfd54813f176a00210b2c7fd402b389a06117df2d02a35f30f0de5e2\"" Apr 21 10:26:50.581996 systemd[1]: Started cri-containerd-2590b52dcfd54813f176a00210b2c7fd402b389a06117df2d02a35f30f0de5e2.scope - libcontainer container 2590b52dcfd54813f176a00210b2c7fd402b389a06117df2d02a35f30f0de5e2. 
Apr 21 10:26:50.638851 containerd[1990]: time="2026-04-21T10:26:50.635924834Z" level=info msg="StartContainer for \"2590b52dcfd54813f176a00210b2c7fd402b389a06117df2d02a35f30f0de5e2\" returns successfully" Apr 21 10:26:51.078407 systemd-networkd[1896]: cali2d98ec5fd1f: Gained IPv6LL Apr 21 10:26:51.156139 kubelet[3197]: I0421 10:26:51.155544 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-649f9d4c46-8gqsh" podStartSLOduration=43.031020017 podStartE2EDuration="48.102130134s" podCreationTimestamp="2026-04-21 10:26:03 +0000 UTC" firstStartedPulling="2026-04-21 10:26:45.42663624 +0000 UTC m=+61.186806800" lastFinishedPulling="2026-04-21 10:26:50.49774638 +0000 UTC m=+66.257916917" observedRunningTime="2026-04-21 10:26:51.101596439 +0000 UTC m=+66.861767001" watchObservedRunningTime="2026-04-21 10:26:51.102130134 +0000 UTC m=+66.862300691" Apr 21 10:26:51.205993 systemd-networkd[1896]: cali78504a00d11: Gained IPv6LL Apr 21 10:26:51.864216 systemd[1]: Started sshd@8-172.31.16.209:22-50.85.169.122:41760.service - OpenSSH per-connection server daemon (50.85.169.122:41760). 
Apr 21 10:26:52.051080 kubelet[3197]: I0421 10:26:52.051036 3197 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:26:52.388587 containerd[1990]: time="2026-04-21T10:26:52.388531431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:52.390664 containerd[1990]: time="2026-04-21T10:26:52.390596366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:26:52.393183 containerd[1990]: time="2026-04-21T10:26:52.393084178Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:52.409597 containerd[1990]: time="2026-04-21T10:26:52.408710823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:52.409597 containerd[1990]: time="2026-04-21T10:26:52.409382085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.91134089s" Apr 21 10:26:52.409597 containerd[1990]: time="2026-04-21T10:26:52.409423140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:26:52.419166 containerd[1990]: time="2026-04-21T10:26:52.419126031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:26:52.460482 containerd[1990]: time="2026-04-21T10:26:52.460431155Z" 
level=info msg="CreateContainer within sandbox \"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:26:52.518237 containerd[1990]: time="2026-04-21T10:26:52.518183537Z" level=info msg="CreateContainer within sandbox \"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5abbcb89397fadad08f81c6292e75978d46e0276cc9e661350b9021a547cc7a1\"" Apr 21 10:26:52.518872 containerd[1990]: time="2026-04-21T10:26:52.518836417Z" level=info msg="StartContainer for \"5abbcb89397fadad08f81c6292e75978d46e0276cc9e661350b9021a547cc7a1\"" Apr 21 10:26:52.561673 systemd[1]: Started cri-containerd-5abbcb89397fadad08f81c6292e75978d46e0276cc9e661350b9021a547cc7a1.scope - libcontainer container 5abbcb89397fadad08f81c6292e75978d46e0276cc9e661350b9021a547cc7a1. Apr 21 10:26:52.600406 containerd[1990]: time="2026-04-21T10:26:52.600354168Z" level=info msg="StartContainer for \"5abbcb89397fadad08f81c6292e75978d46e0276cc9e661350b9021a547cc7a1\" returns successfully" Apr 21 10:26:52.954021 sshd[6037]: Accepted publickey for core from 50.85.169.122 port 41760 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:26:52.962507 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:26:52.973239 systemd-logind[1964]: New session 9 of user core. Apr 21 10:26:52.979989 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:26:53.384745 ntpd[1958]: Listen normally on 11 calie87afa0ff3a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 11 calie87afa0ff3a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 12 calibe80a9a40e7 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 13 cali2be355f7316 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 14 cali1171b3531ee [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 15 calia2dd2ab7153 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 16 cali78504a00d11 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 21 10:26:53.385580 ntpd[1958]: 21 Apr 10:26:53 ntpd[1958]: Listen normally on 17 cali2d98ec5fd1f [fe80::ecee:eeff:feee:eeee%14]:123 Apr 21 10:26:53.384873 ntpd[1958]: Listen normally on 12 calibe80a9a40e7 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 21 10:26:53.384919 ntpd[1958]: Listen normally on 13 cali2be355f7316 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:26:53.384959 ntpd[1958]: Listen normally on 14 cali1171b3531ee [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:26:53.384998 ntpd[1958]: Listen normally on 15 calia2dd2ab7153 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 21 10:26:53.385035 ntpd[1958]: Listen normally on 16 cali78504a00d11 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 21 10:26:53.385077 ntpd[1958]: Listen normally on 17 cali2d98ec5fd1f [fe80::ecee:eeff:feee:eeee%14]:123 Apr 21 10:26:54.364104 sshd[6037]: pam_unix(sshd:session): session closed for user core Apr 21 10:26:54.370892 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. 
Apr 21 10:26:54.371999 systemd[1]: sshd@8-172.31.16.209:22-50.85.169.122:41760.service: Deactivated successfully. Apr 21 10:26:54.374798 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:26:54.376281 systemd-logind[1964]: Removed session 9. Apr 21 10:26:57.018372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058280737.mount: Deactivated successfully. Apr 21 10:26:57.805010 containerd[1990]: time="2026-04-21T10:26:57.804953628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:57.806958 containerd[1990]: time="2026-04-21T10:26:57.806789277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:26:57.809832 containerd[1990]: time="2026-04-21T10:26:57.809731346Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:57.815378 containerd[1990]: time="2026-04-21T10:26:57.814468762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:26:57.815378 containerd[1990]: time="2026-04-21T10:26:57.815242358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.396074834s" Apr 21 10:26:57.815378 containerd[1990]: time="2026-04-21T10:26:57.815280864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:26:57.869714 containerd[1990]: time="2026-04-21T10:26:57.869023083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:26:57.996191 containerd[1990]: time="2026-04-21T10:26:57.996139165Z" level=info msg="CreateContainer within sandbox \"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:26:58.060175 containerd[1990]: time="2026-04-21T10:26:58.059989478Z" level=info msg="CreateContainer within sandbox \"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36\"" Apr 21 10:26:58.074930 containerd[1990]: time="2026-04-21T10:26:58.073678229Z" level=info msg="StartContainer for \"cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36\"" Apr 21 10:26:58.147931 systemd[1]: Started cri-containerd-cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36.scope - libcontainer container cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36. Apr 21 10:26:58.239126 containerd[1990]: time="2026-04-21T10:26:58.235080981Z" level=info msg="StartContainer for \"cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36\" returns successfully" Apr 21 10:26:58.545079 systemd[1]: run-containerd-runc-k8s.io-cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36-runc.jjxPk7.mount: Deactivated successfully. 
Apr 21 10:26:58.550641 kubelet[3197]: I0421 10:26:58.506101 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lw96w" podStartSLOduration=68.472377503 podStartE2EDuration="1m8.472377503s" podCreationTimestamp="2026-04-21 10:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:26:51.157133806 +0000 UTC m=+66.917304362" watchObservedRunningTime="2026-04-21 10:26:58.472377503 +0000 UTC m=+74.232548064" Apr 21 10:26:58.555395 kubelet[3197]: I0421 10:26:58.553723 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-sxk82" podStartSLOduration=47.320499673 podStartE2EDuration="56.553710127s" podCreationTimestamp="2026-04-21 10:26:02 +0000 UTC" firstStartedPulling="2026-04-21 10:26:48.634481132 +0000 UTC m=+64.394651669" lastFinishedPulling="2026-04-21 10:26:57.867691558 +0000 UTC m=+73.627862123" observedRunningTime="2026-04-21 10:26:58.55307328 +0000 UTC m=+74.313243843" watchObservedRunningTime="2026-04-21 10:26:58.553710127 +0000 UTC m=+74.313880688" Apr 21 10:26:59.537910 systemd[1]: Started sshd@9-172.31.16.209:22-50.85.169.122:41764.service - OpenSSH per-connection server daemon (50.85.169.122:41764). Apr 21 10:27:00.421290 systemd[1]: run-containerd-runc-k8s.io-cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36-runc.DK3PKp.mount: Deactivated successfully. Apr 21 10:27:00.744733 sshd[6204]: Accepted publickey for core from 50.85.169.122 port 41764 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:00.752632 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:00.760185 systemd-logind[1964]: New session 10 of user core. Apr 21 10:27:00.765982 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 21 10:27:02.702439 containerd[1990]: time="2026-04-21T10:27:02.702377588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:02.705272 containerd[1990]: time="2026-04-21T10:27:02.705208701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:27:02.716059 containerd[1990]: time="2026-04-21T10:27:02.715892937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.846818588s" Apr 21 10:27:02.716059 containerd[1990]: time="2026-04-21T10:27:02.715948749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:27:02.894884 containerd[1990]: time="2026-04-21T10:27:02.891910332Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:02.894884 containerd[1990]: time="2026-04-21T10:27:02.893285883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:02.896875 containerd[1990]: time="2026-04-21T10:27:02.896506157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:27:03.238944 sshd[6204]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:03.248620 systemd[1]: 
sshd@9-172.31.16.209:22-50.85.169.122:41764.service: Deactivated successfully. Apr 21 10:27:03.250121 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:27:03.254027 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:27:03.256099 systemd-logind[1964]: Removed session 10. Apr 21 10:27:03.383680 containerd[1990]: time="2026-04-21T10:27:03.383621454Z" level=info msg="CreateContainer within sandbox \"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:27:03.425151 systemd[1]: Started sshd@10-172.31.16.209:22-50.85.169.122:50576.service - OpenSSH per-connection server daemon (50.85.169.122:50576). Apr 21 10:27:03.517522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930125817.mount: Deactivated successfully. Apr 21 10:27:03.565793 containerd[1990]: time="2026-04-21T10:27:03.565341127Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:03.565793 containerd[1990]: time="2026-04-21T10:27:03.565412422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:27:03.567395 containerd[1990]: time="2026-04-21T10:27:03.567347877Z" level=info msg="CreateContainer within sandbox \"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606\"" Apr 21 10:27:03.568898 containerd[1990]: time="2026-04-21T10:27:03.568644481Z" level=info msg="StartContainer for \"dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606\"" Apr 21 10:27:03.571452 containerd[1990]: time="2026-04-21T10:27:03.571402602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 674.845287ms" Apr 21 10:27:03.571452 containerd[1990]: time="2026-04-21T10:27:03.571454852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:27:03.573117 containerd[1990]: time="2026-04-21T10:27:03.573075709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:27:03.583877 containerd[1990]: time="2026-04-21T10:27:03.583807906Z" level=info msg="CreateContainer within sandbox \"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:27:03.638149 containerd[1990]: time="2026-04-21T10:27:03.638098079Z" level=info msg="CreateContainer within sandbox \"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"171e34f2c7641183f2770bce3bdcf377e50495d4e69deae558c3158fd6f4bf16\"" Apr 21 10:27:03.640071 containerd[1990]: time="2026-04-21T10:27:03.639967621Z" level=info msg="StartContainer for \"171e34f2c7641183f2770bce3bdcf377e50495d4e69deae558c3158fd6f4bf16\"" Apr 21 10:27:03.693030 systemd[1]: Started cri-containerd-171e34f2c7641183f2770bce3bdcf377e50495d4e69deae558c3158fd6f4bf16.scope - libcontainer container 171e34f2c7641183f2770bce3bdcf377e50495d4e69deae558c3158fd6f4bf16. Apr 21 10:27:03.696147 systemd[1]: Started cri-containerd-dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606.scope - libcontainer container dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606. 
Apr 21 10:27:03.813237 containerd[1990]: time="2026-04-21T10:27:03.812493391Z" level=info msg="StartContainer for \"171e34f2c7641183f2770bce3bdcf377e50495d4e69deae558c3158fd6f4bf16\" returns successfully" Apr 21 10:27:03.832250 containerd[1990]: time="2026-04-21T10:27:03.830947780Z" level=info msg="StartContainer for \"dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606\" returns successfully" Apr 21 10:27:04.404176 systemd[1]: run-containerd-runc-k8s.io-cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36-runc.1GZtyc.mount: Deactivated successfully. Apr 21 10:27:04.503771 sshd[6280]: Accepted publickey for core from 50.85.169.122 port 50576 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:04.516087 sshd[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:04.549716 systemd-logind[1964]: New session 11 of user core. Apr 21 10:27:04.560243 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:27:05.001063 systemd[1]: run-containerd-runc-k8s.io-81df20b146569fcf63c008a2cc15ebe7cc53641b816928658bee76665ea80127-runc.zEWJFN.mount: Deactivated successfully. Apr 21 10:27:05.627005 kubelet[3197]: I0421 10:27:05.602976 3197 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:27:05.805289 systemd[1]: run-containerd-runc-k8s.io-dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606-runc.mgatzj.mount: Deactivated successfully. 
Apr 21 10:27:06.092536 kubelet[3197]: I0421 10:27:06.092039 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55f8bbbb7b-2qs25" podStartSLOduration=48.081363467 podStartE2EDuration="1m2.071614618s" podCreationTimestamp="2026-04-21 10:26:04 +0000 UTC" firstStartedPulling="2026-04-21 10:26:48.888023038 +0000 UTC m=+64.648193577" lastFinishedPulling="2026-04-21 10:27:02.878274171 +0000 UTC m=+78.638444728" observedRunningTime="2026-04-21 10:27:05.947179124 +0000 UTC m=+81.707349685" watchObservedRunningTime="2026-04-21 10:27:06.071614618 +0000 UTC m=+81.831785177" Apr 21 10:27:06.115473 kubelet[3197]: I0421 10:27:06.094180 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-649f9d4c46-7d8jv" podStartSLOduration=48.960603787 podStartE2EDuration="1m3.094160717s" podCreationTimestamp="2026-04-21 10:26:03 +0000 UTC" firstStartedPulling="2026-04-21 10:26:49.439240602 +0000 UTC m=+65.199411139" lastFinishedPulling="2026-04-21 10:27:03.572797511 +0000 UTC m=+79.332968069" observedRunningTime="2026-04-21 10:27:05.884750763 +0000 UTC m=+81.644921323" watchObservedRunningTime="2026-04-21 10:27:06.094160717 +0000 UTC m=+81.854331278" Apr 21 10:27:06.984778 sshd[6280]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:06.995345 systemd[1]: sshd@10-172.31.16.209:22-50.85.169.122:50576.service: Deactivated successfully. Apr 21 10:27:07.001139 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:27:07.011989 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:27:07.017657 systemd-logind[1964]: Removed session 11. Apr 21 10:27:07.183179 systemd[1]: Started sshd@11-172.31.16.209:22-50.85.169.122:50580.service - OpenSSH per-connection server daemon (50.85.169.122:50580). 
Apr 21 10:27:08.344895 sshd[6451]: Accepted publickey for core from 50.85.169.122 port 50580 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:08.352193 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:08.361127 systemd-logind[1964]: New session 12 of user core. Apr 21 10:27:08.368547 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:27:08.452999 containerd[1990]: time="2026-04-21T10:27:08.452944138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:08.456822 containerd[1990]: time="2026-04-21T10:27:08.456100753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:27:08.459715 containerd[1990]: time="2026-04-21T10:27:08.459027408Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:08.510844 containerd[1990]: time="2026-04-21T10:27:08.510746363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:27:08.513307 containerd[1990]: time="2026-04-21T10:27:08.512377403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.939259961s" Apr 21 10:27:08.513307 containerd[1990]: time="2026-04-21T10:27:08.512431080Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:27:08.681183 containerd[1990]: time="2026-04-21T10:27:08.681127944Z" level=info msg="CreateContainer within sandbox \"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:27:08.757050 containerd[1990]: time="2026-04-21T10:27:08.756999240Z" level=info msg="CreateContainer within sandbox \"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38\"" Apr 21 10:27:08.761255 containerd[1990]: time="2026-04-21T10:27:08.761154110Z" level=info msg="StartContainer for \"c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38\"" Apr 21 10:27:09.914098 systemd[1]: run-containerd-runc-k8s.io-c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38-runc.heJLKt.mount: Deactivated successfully. Apr 21 10:27:09.939437 systemd[1]: Started cri-containerd-c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38.scope - libcontainer container c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38. 
Apr 21 10:27:10.073139 containerd[1990]: time="2026-04-21T10:27:10.072129127Z" level=info msg="StartContainer for \"c2f06168f2801a781c0d5eba602f2248601409963c1814c96a7e811c43383f38\" returns successfully" Apr 21 10:27:10.422239 kubelet[3197]: I0421 10:27:10.404483 3197 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-fwns8" podStartSLOduration=43.96322864 podStartE2EDuration="1m6.35657883s" podCreationTimestamp="2026-04-21 10:26:04 +0000 UTC" firstStartedPulling="2026-04-21 10:26:46.21069415 +0000 UTC m=+61.970864687" lastFinishedPulling="2026-04-21 10:27:08.604044339 +0000 UTC m=+84.364214877" observedRunningTime="2026-04-21 10:27:10.355308294 +0000 UTC m=+86.115478858" watchObservedRunningTime="2026-04-21 10:27:10.35657883 +0000 UTC m=+86.116749387" Apr 21 10:27:10.703798 sshd[6451]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:10.707671 systemd[1]: sshd@11-172.31.16.209:22-50.85.169.122:50580.service: Deactivated successfully. Apr 21 10:27:10.710299 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:27:10.712605 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:27:10.714596 systemd-logind[1964]: Removed session 12. Apr 21 10:27:10.872353 kubelet[3197]: I0421 10:27:10.870784 3197 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:27:10.878328 kubelet[3197]: I0421 10:27:10.878285 3197 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:27:15.897874 systemd[1]: Started sshd@12-172.31.16.209:22-50.85.169.122:38400.service - OpenSSH per-connection server daemon (50.85.169.122:38400). 
Apr 21 10:27:17.013160 sshd[6517]: Accepted publickey for core from 50.85.169.122 port 38400 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:17.016352 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:17.025023 systemd-logind[1964]: New session 13 of user core. Apr 21 10:27:17.031996 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:27:18.040658 sshd[6517]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:18.046278 systemd[1]: sshd@12-172.31.16.209:22-50.85.169.122:38400.service: Deactivated successfully. Apr 21 10:27:18.048972 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:27:18.050201 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:27:18.051543 systemd-logind[1964]: Removed session 13. Apr 21 10:27:18.220102 systemd[1]: Started sshd@13-172.31.16.209:22-50.85.169.122:38404.service - OpenSSH per-connection server daemon (50.85.169.122:38404). Apr 21 10:27:19.237641 sshd[6531]: Accepted publickey for core from 50.85.169.122 port 38404 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:19.240319 sshd[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:19.245879 systemd-logind[1964]: New session 14 of user core. Apr 21 10:27:19.255030 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:27:21.098543 sshd[6531]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:21.103183 systemd[1]: sshd@13-172.31.16.209:22-50.85.169.122:38404.service: Deactivated successfully. Apr 21 10:27:21.106245 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:27:21.108053 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:27:21.109722 systemd-logind[1964]: Removed session 14. 
Apr 21 10:27:21.287118 systemd[1]: Started sshd@14-172.31.16.209:22-50.85.169.122:49966.service - OpenSSH per-connection server daemon (50.85.169.122:49966). Apr 21 10:27:22.357486 sshd[6554]: Accepted publickey for core from 50.85.169.122 port 49966 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:22.359259 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:22.364894 systemd-logind[1964]: New session 15 of user core. Apr 21 10:27:22.366962 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:27:23.942125 sshd[6554]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:23.955005 systemd[1]: sshd@14-172.31.16.209:22-50.85.169.122:49966.service: Deactivated successfully. Apr 21 10:27:23.958050 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:27:23.959112 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:27:23.960538 systemd-logind[1964]: Removed session 15. Apr 21 10:27:24.127235 systemd[1]: Started sshd@15-172.31.16.209:22-50.85.169.122:49982.service - OpenSSH per-connection server daemon (50.85.169.122:49982). Apr 21 10:27:25.210687 sshd[6578]: Accepted publickey for core from 50.85.169.122 port 49982 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:25.214235 sshd[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:25.221085 systemd-logind[1964]: New session 16 of user core. Apr 21 10:27:25.225053 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:27:25.934338 systemd[1]: run-containerd-runc-k8s.io-dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606-runc.5jlShU.mount: Deactivated successfully. Apr 21 10:27:27.089616 sshd[6578]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:27.094651 systemd[1]: sshd@15-172.31.16.209:22-50.85.169.122:49982.service: Deactivated successfully. 
Apr 21 10:27:27.097683 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:27:27.100127 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:27:27.101350 systemd-logind[1964]: Removed session 16. Apr 21 10:27:27.264235 systemd[1]: Started sshd@16-172.31.16.209:22-50.85.169.122:49992.service - OpenSSH per-connection server daemon (50.85.169.122:49992). Apr 21 10:27:28.333073 sshd[6612]: Accepted publickey for core from 50.85.169.122 port 49992 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:28.334731 sshd[6612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:28.339925 systemd-logind[1964]: New session 17 of user core. Apr 21 10:27:28.348987 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:27:29.272129 sshd[6612]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:29.276146 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:27:29.276496 systemd[1]: sshd@16-172.31.16.209:22-50.85.169.122:49992.service: Deactivated successfully. Apr 21 10:27:29.279034 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:27:29.281831 systemd-logind[1964]: Removed session 17. Apr 21 10:27:34.446230 systemd[1]: Started sshd@17-172.31.16.209:22-50.85.169.122:48968.service - OpenSSH per-connection server daemon (50.85.169.122:48968). Apr 21 10:27:35.539320 sshd[6652]: Accepted publickey for core from 50.85.169.122 port 48968 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:35.544158 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:35.551599 systemd-logind[1964]: New session 18 of user core. Apr 21 10:27:35.560050 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 21 10:27:35.701646 systemd[1]: run-containerd-runc-k8s.io-dc1a7081167be5482c8cedf6ff1b99e1a8d0ac39d694466208c1c9710c262606-runc.A6rDMY.mount: Deactivated successfully. Apr 21 10:27:37.401108 sshd[6652]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:37.417744 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:27:37.422324 systemd[1]: sshd@17-172.31.16.209:22-50.85.169.122:48968.service: Deactivated successfully. Apr 21 10:27:37.428574 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:27:37.430901 systemd-logind[1964]: Removed session 18. Apr 21 10:27:42.571505 systemd[1]: Started sshd@18-172.31.16.209:22-50.85.169.122:51356.service - OpenSSH per-connection server daemon (50.85.169.122:51356). Apr 21 10:27:43.658580 sshd[6704]: Accepted publickey for core from 50.85.169.122 port 51356 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:27:43.664675 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:27:43.671420 systemd-logind[1964]: New session 19 of user core. Apr 21 10:27:43.674991 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:27:45.200036 containerd[1990]: time="2026-04-21T10:27:45.157720206Z" level=info msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\"" Apr 21 10:27:45.439869 sshd[6704]: pam_unix(sshd:session): session closed for user core Apr 21 10:27:45.444995 systemd[1]: sshd@18-172.31.16.209:22-50.85.169.122:51356.service: Deactivated successfully. Apr 21 10:27:45.447943 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:27:45.448929 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:27:45.450313 systemd-logind[1964]: Removed session 19. 
Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:45.760 [WARNING][6724] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5025f349-a8c7-438c-8426-5f946767fac4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98", Pod:"coredns-7d764666f9-4g4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe80a9a40e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:45.783 [INFO][6724] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:45.783 [INFO][6724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" iface="eth0" netns="" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:45.783 [INFO][6724] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:45.783 [INFO][6724] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.154 [INFO][6733] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.159 [INFO][6733] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.159 [INFO][6733] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.177 [WARNING][6733] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.177 [INFO][6733] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.183 [INFO][6733] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:46.188668 containerd[1990]: 2026-04-21 10:27:46.186 [INFO][6724] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.195085 containerd[1990]: time="2026-04-21T10:27:46.193441398Z" level=info msg="TearDown network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" successfully" Apr 21 10:27:46.195085 containerd[1990]: time="2026-04-21T10:27:46.193494175Z" level=info msg="StopPodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" returns successfully" Apr 21 10:27:46.198927 containerd[1990]: time="2026-04-21T10:27:46.198884538Z" level=info msg="RemovePodSandbox for \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\"" Apr 21 10:27:46.202666 containerd[1990]: time="2026-04-21T10:27:46.202618059Z" level=info msg="Forcibly stopping sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\"" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.252 [WARNING][6747] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5025f349-a8c7-438c-8426-5f946767fac4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"50c6fa20a5b1665d3e2a11613d7a2c2befffef52b15f49a9e815e1e2974f3e98", Pod:"coredns-7d764666f9-4g4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe80a9a40e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.252 [INFO][6747] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.252 [INFO][6747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" iface="eth0" netns="" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.252 [INFO][6747] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.252 [INFO][6747] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.286 [INFO][6754] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.286 [INFO][6754] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.286 [INFO][6754] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.293 [WARNING][6754] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.293 [INFO][6754] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" HandleID="k8s-pod-network.8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--4g4wj-eth0" Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.295 [INFO][6754] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:46.302916 containerd[1990]: 2026-04-21 10:27:46.299 [INFO][6747] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89" Apr 21 10:27:46.303803 containerd[1990]: time="2026-04-21T10:27:46.302962874Z" level=info msg="TearDown network for sandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" successfully" Apr 21 10:27:46.368904 containerd[1990]: time="2026-04-21T10:27:46.368845332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:46.369069 containerd[1990]: time="2026-04-21T10:27:46.368963531Z" level=info msg="RemovePodSandbox \"8ab267fbe2cc1a9a8997a88ffcbb36aa97a65b7854b62d0f0884c1037190da89\" returns successfully" Apr 21 10:27:46.381087 containerd[1990]: time="2026-04-21T10:27:46.381049779Z" level=info msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\"" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.423 [WARNING][6769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"40d3332a-bed0-42c4-9601-3b65769379ef", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38", Pod:"calico-apiserver-649f9d4c46-8gqsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie87afa0ff3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.424 [INFO][6769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.424 [INFO][6769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" iface="eth0" netns="" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.424 [INFO][6769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.424 [INFO][6769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.457 [INFO][6777] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.457 [INFO][6777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.458 [INFO][6777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.464 [WARNING][6777] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.464 [INFO][6777] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.466 [INFO][6777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:46.475420 containerd[1990]: 2026-04-21 10:27:46.470 [INFO][6769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.475420 containerd[1990]: time="2026-04-21T10:27:46.473805368Z" level=info msg="TearDown network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" successfully" Apr 21 10:27:46.475420 containerd[1990]: time="2026-04-21T10:27:46.473841594Z" level=info msg="StopPodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" returns successfully" Apr 21 10:27:46.475420 containerd[1990]: time="2026-04-21T10:27:46.474920216Z" level=info msg="RemovePodSandbox for \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\"" Apr 21 10:27:46.475420 containerd[1990]: time="2026-04-21T10:27:46.474960438Z" level=info msg="Forcibly stopping sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\"" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.549 [WARNING][6791] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"40d3332a-bed0-42c4-9601-3b65769379ef", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"5c72079d8f28059114cdf6647e9d9e27f64ef2e400a91a4d2dba535321378c38", Pod:"calico-apiserver-649f9d4c46-8gqsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie87afa0ff3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.550 [INFO][6791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.550 [INFO][6791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" iface="eth0" netns="" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.550 [INFO][6791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.550 [INFO][6791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.605 [INFO][6798] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.605 [INFO][6798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.605 [INFO][6798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.622 [WARNING][6798] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.623 [INFO][6798] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" HandleID="k8s-pod-network.1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--8gqsh-eth0" Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.629 [INFO][6798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:46.642413 containerd[1990]: 2026-04-21 10:27:46.633 [INFO][6791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df" Apr 21 10:27:46.642413 containerd[1990]: time="2026-04-21T10:27:46.640555997Z" level=info msg="TearDown network for sandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" successfully" Apr 21 10:27:46.842870 containerd[1990]: time="2026-04-21T10:27:46.841394788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:46.842870 containerd[1990]: time="2026-04-21T10:27:46.841496890Z" level=info msg="RemovePodSandbox \"1921076564622a416ba956c17d2ee3efcc7347f8fc4e4b56ae6bddc9b6e845df\" returns successfully" Apr 21 10:27:46.842870 containerd[1990]: time="2026-04-21T10:27:46.842024820Z" level=info msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.908 [WARNING][6812] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0", GenerateName:"calico-kube-controllers-55f8bbbb7b-", Namespace:"calico-system", SelfLink:"", UID:"7a282b52-42ea-480c-9fa0-f3ad8d196a94", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f8bbbb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd", Pod:"calico-kube-controllers-55f8bbbb7b-2qs25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2dd2ab7153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.908 [INFO][6812] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.908 [INFO][6812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" iface="eth0" netns="" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.908 [INFO][6812] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.908 [INFO][6812] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.935 [INFO][6820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.936 [INFO][6820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.936 [INFO][6820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.943 [WARNING][6820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.943 [INFO][6820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.945 [INFO][6820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:46.949873 containerd[1990]: 2026-04-21 10:27:46.947 [INFO][6812] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:46.950635 containerd[1990]: time="2026-04-21T10:27:46.950586240Z" level=info msg="TearDown network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" successfully" Apr 21 10:27:46.950635 containerd[1990]: time="2026-04-21T10:27:46.950619461Z" level=info msg="StopPodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" returns successfully" Apr 21 10:27:46.951447 containerd[1990]: time="2026-04-21T10:27:46.951423590Z" level=info msg="RemovePodSandbox for \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" Apr 21 10:27:46.951514 containerd[1990]: time="2026-04-21T10:27:46.951458328Z" level=info msg="Forcibly stopping sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\"" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:46.995 [WARNING][6836] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0", GenerateName:"calico-kube-controllers-55f8bbbb7b-", Namespace:"calico-system", SelfLink:"", UID:"7a282b52-42ea-480c-9fa0-f3ad8d196a94", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f8bbbb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"941fbde1bd880d46d11ce609e0a369c7bc07e0064f415ae78ddc6e9994256fbd", Pod:"calico-kube-controllers-55f8bbbb7b-2qs25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2dd2ab7153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:46.995 [INFO][6836] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:46.995 [INFO][6836] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" iface="eth0" netns="" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:46.996 [INFO][6836] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:46.996 [INFO][6836] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.026 [INFO][6844] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.026 [INFO][6844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.026 [INFO][6844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.035 [WARNING][6844] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.035 [INFO][6844] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" HandleID="k8s-pod-network.8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--55f8bbbb7b--2qs25-eth0" Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.037 [INFO][6844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.042357 containerd[1990]: 2026-04-21 10:27:47.040 [INFO][6836] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c" Apr 21 10:27:47.043462 containerd[1990]: time="2026-04-21T10:27:47.042415982Z" level=info msg="TearDown network for sandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" successfully" Apr 21 10:27:47.058243 containerd[1990]: time="2026-04-21T10:27:47.058161038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:47.058418 containerd[1990]: time="2026-04-21T10:27:47.058286386Z" level=info msg="RemovePodSandbox \"8bfe3022d295390dfbfca7dd514267e7525fbf3fa65f27c8b1cdb9214002a69c\" returns successfully" Apr 21 10:27:47.059835 containerd[1990]: time="2026-04-21T10:27:47.059313360Z" level=info msg="StopPodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.112 [WARNING][6858] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"ce8a55a1-902f-4d92-8943-f6c346590495", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6", Pod:"calico-apiserver-649f9d4c46-7d8jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali78504a00d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.113 [INFO][6858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.113 [INFO][6858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" iface="eth0" netns="" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.113 [INFO][6858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.113 [INFO][6858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.138 [INFO][6865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.138 [INFO][6865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.138 [INFO][6865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.146 [WARNING][6865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.146 [INFO][6865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.148 [INFO][6865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.152803 containerd[1990]: 2026-04-21 10:27:47.150 [INFO][6858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.154933 containerd[1990]: time="2026-04-21T10:27:47.153425273Z" level=info msg="TearDown network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" successfully" Apr 21 10:27:47.154933 containerd[1990]: time="2026-04-21T10:27:47.153470801Z" level=info msg="StopPodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" returns successfully" Apr 21 10:27:47.155280 containerd[1990]: time="2026-04-21T10:27:47.154939501Z" level=info msg="RemovePodSandbox for \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" Apr 21 10:27:47.155280 containerd[1990]: time="2026-04-21T10:27:47.154973575Z" level=info msg="Forcibly stopping sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\"" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.199 [WARNING][6879] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0", GenerateName:"calico-apiserver-649f9d4c46-", Namespace:"calico-system", SelfLink:"", UID:"ce8a55a1-902f-4d92-8943-f6c346590495", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649f9d4c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b90897f9592aa119dda982ddc329cfb34d55ea0e8d40b9f25aa68235510ed8a6", Pod:"calico-apiserver-649f9d4c46-7d8jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali78504a00d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.200 [INFO][6879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.200 [INFO][6879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" iface="eth0" netns="" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.200 [INFO][6879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.200 [INFO][6879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.226 [INFO][6886] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.227 [INFO][6886] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.227 [INFO][6886] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.233 [WARNING][6886] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.233 [INFO][6886] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" HandleID="k8s-pod-network.2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Workload="ip--172--31--16--209-k8s-calico--apiserver--649f9d4c46--7d8jv-eth0" Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.235 [INFO][6886] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.240206 containerd[1990]: 2026-04-21 10:27:47.237 [INFO][6879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0" Apr 21 10:27:47.242054 containerd[1990]: time="2026-04-21T10:27:47.240493741Z" level=info msg="TearDown network for sandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" successfully" Apr 21 10:27:47.246546 containerd[1990]: time="2026-04-21T10:27:47.246492156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:47.246710 containerd[1990]: time="2026-04-21T10:27:47.246588515Z" level=info msg="RemovePodSandbox \"2f155b8cfad7c26dd7aa19a715d2ac61ac506fe0820c27ff981c2d2a33ae15e0\" returns successfully" Apr 21 10:27:47.247771 containerd[1990]: time="2026-04-21T10:27:47.247394809Z" level=info msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.296 [WARNING][6900] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef1622f-119b-4965-be74-eb6954ebbd5e", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2", Pod:"csi-node-driver-fwns8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2be355f7316", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.296 [INFO][6900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.296 [INFO][6900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" iface="eth0" netns="" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.296 [INFO][6900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.296 [INFO][6900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.322 [INFO][6907] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.322 [INFO][6907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.322 [INFO][6907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.329 [WARNING][6907] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.329 [INFO][6907] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.331 [INFO][6907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.336400 containerd[1990]: 2026-04-21 10:27:47.333 [INFO][6900] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.338683 containerd[1990]: time="2026-04-21T10:27:47.336447300Z" level=info msg="TearDown network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" successfully" Apr 21 10:27:47.338683 containerd[1990]: time="2026-04-21T10:27:47.336475934Z" level=info msg="StopPodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" returns successfully" Apr 21 10:27:47.338683 containerd[1990]: time="2026-04-21T10:27:47.337646881Z" level=info msg="RemovePodSandbox for \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" Apr 21 10:27:47.338683 containerd[1990]: time="2026-04-21T10:27:47.337680393Z" level=info msg="Forcibly stopping sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\"" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.377 [WARNING][6921] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef1622f-119b-4965-be74-eb6954ebbd5e", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"efbb5d0e0f032ac76a5c8430e31dfecb8af94ce56498dcbcf53e9a0487aa9bf2", Pod:"csi-node-driver-fwns8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2be355f7316", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.377 [INFO][6921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.377 [INFO][6921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" iface="eth0" netns="" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.377 [INFO][6921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.378 [INFO][6921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.404 [INFO][6929] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.405 [INFO][6929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.405 [INFO][6929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.412 [WARNING][6929] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.412 [INFO][6929] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" HandleID="k8s-pod-network.70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Workload="ip--172--31--16--209-k8s-csi--node--driver--fwns8-eth0" Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.416 [INFO][6929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.422391 containerd[1990]: 2026-04-21 10:27:47.418 [INFO][6921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600" Apr 21 10:27:47.423477 containerd[1990]: time="2026-04-21T10:27:47.422371454Z" level=info msg="TearDown network for sandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" successfully" Apr 21 10:27:47.460414 containerd[1990]: time="2026-04-21T10:27:47.459488606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:47.460414 containerd[1990]: time="2026-04-21T10:27:47.459841056Z" level=info msg="RemovePodSandbox \"70b50d3a904fd10c5af2174ce6d03ec6cdd64e7ea08976844c4c95e27e82e600\" returns successfully" Apr 21 10:27:47.460645 containerd[1990]: time="2026-04-21T10:27:47.460622584Z" level=info msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.555 [WARNING][6943] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee27081f-d3fb-48c1-8c12-8d86f1601923", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8", Pod:"goldmane-9f7667bb8-sxk82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1171b3531ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.556 [INFO][6943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.556 [INFO][6943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" iface="eth0" netns="" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.556 [INFO][6943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.556 [INFO][6943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.583 [INFO][6951] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.583 [INFO][6951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.583 [INFO][6951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.590 [WARNING][6951] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.590 [INFO][6951] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.592 [INFO][6951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.597888 containerd[1990]: 2026-04-21 10:27:47.594 [INFO][6943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.599314 containerd[1990]: time="2026-04-21T10:27:47.598894598Z" level=info msg="TearDown network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" successfully" Apr 21 10:27:47.599314 containerd[1990]: time="2026-04-21T10:27:47.598949712Z" level=info msg="StopPodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" returns successfully" Apr 21 10:27:47.599858 containerd[1990]: time="2026-04-21T10:27:47.599836496Z" level=info msg="RemovePodSandbox for \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" Apr 21 10:27:47.599991 containerd[1990]: time="2026-04-21T10:27:47.599971477Z" level=info msg="Forcibly stopping sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\"" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.645 [WARNING][6965] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee27081f-d3fb-48c1-8c12-8d86f1601923", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"0b06360fe1cf7ed3603bede00773e12814d781b728251e2cf6f14c663226edc8", Pod:"goldmane-9f7667bb8-sxk82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1171b3531ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.645 [INFO][6965] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.645 [INFO][6965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" iface="eth0" netns="" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.645 [INFO][6965] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.645 [INFO][6965] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.678 [INFO][6973] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.678 [INFO][6973] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.678 [INFO][6973] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.693 [WARNING][6973] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.693 [INFO][6973] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" HandleID="k8s-pod-network.46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Workload="ip--172--31--16--209-k8s-goldmane--9f7667bb8--sxk82-eth0" Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.695 [INFO][6973] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:27:47.702091 containerd[1990]: 2026-04-21 10:27:47.698 [INFO][6965] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a" Apr 21 10:27:47.702091 containerd[1990]: time="2026-04-21T10:27:47.702053984Z" level=info msg="TearDown network for sandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" successfully" Apr 21 10:27:47.709067 containerd[1990]: time="2026-04-21T10:27:47.708997723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:27:47.709327 containerd[1990]: time="2026-04-21T10:27:47.709146719Z" level=info msg="RemovePodSandbox \"46883bffc89776967dbdd8764dbb449afb2d61a4038fc4a77fc2e330420b0a4a\" returns successfully" Apr 21 10:27:47.710352 containerd[1990]: time="2026-04-21T10:27:47.709934258Z" level=info msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\"" Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.762 [WARNING][6987] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65", Pod:"coredns-7d764666f9-lw96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d98ec5fd1f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.763 [INFO][6987] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.763 [INFO][6987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" iface="eth0" netns=""
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.763 [INFO][6987] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.763 [INFO][6987] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.804 [INFO][6994] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.806 [INFO][6994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.806 [INFO][6994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.816 [WARNING][6994] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.816 [INFO][6994] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.818 [INFO][6994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:27:47.825712 containerd[1990]: 2026-04-21 10:27:47.822 [INFO][6987] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.826384 containerd[1990]: time="2026-04-21T10:27:47.826184011Z" level=info msg="TearDown network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" successfully"
Apr 21 10:27:47.826384 containerd[1990]: time="2026-04-21T10:27:47.826219407Z" level=info msg="StopPodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" returns successfully"
Apr 21 10:27:47.826914 containerd[1990]: time="2026-04-21T10:27:47.826884665Z" level=info msg="RemovePodSandbox for \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\""
Apr 21 10:27:47.827146 containerd[1990]: time="2026-04-21T10:27:47.826918160Z" level=info msg="Forcibly stopping sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\""
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.877 [WARNING][7008] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b63d72e0-ee8f-4d5f-bf38-cc48f379bdeb", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"c89fd3392c0cff6bd1955437e3c62fd900475ad46f1f2d8d6f9b2287cac5bf65", Pod:"coredns-7d764666f9-lw96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d98ec5fd1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.877 [INFO][7008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.877 [INFO][7008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" iface="eth0" netns=""
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.877 [INFO][7008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.877 [INFO][7008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.904 [INFO][7015] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.905 [INFO][7015] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.905 [INFO][7015] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.912 [WARNING][7015] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.912 [INFO][7015] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" HandleID="k8s-pod-network.15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c" Workload="ip--172--31--16--209-k8s-coredns--7d764666f9--lw96w-eth0"
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.914 [INFO][7015] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:27:47.919944 containerd[1990]: 2026-04-21 10:27:47.917 [INFO][7008] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c"
Apr 21 10:27:47.920681 containerd[1990]: time="2026-04-21T10:27:47.919994152Z" level=info msg="TearDown network for sandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" successfully"
Apr 21 10:27:47.928726 containerd[1990]: time="2026-04-21T10:27:47.928472999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:27:47.928726 containerd[1990]: time="2026-04-21T10:27:47.928571179Z" level=info msg="RemovePodSandbox \"15a2f8765554c6093d0b5dc4217e86d933fe6f60f4f9d4c68fb50764239d133c\" returns successfully"
Apr 21 10:27:50.639233 systemd[1]: Started sshd@19-172.31.16.209:22-50.85.169.122:40944.service - OpenSSH per-connection server daemon (50.85.169.122:40944).
Apr 21 10:27:51.763598 sshd[7023]: Accepted publickey for core from 50.85.169.122 port 40944 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:27:51.769060 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:27:51.776796 systemd-logind[1964]: New session 20 of user core.
Apr 21 10:27:51.779221 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:27:53.695837 sshd[7023]: pam_unix(sshd:session): session closed for user core
Apr 21 10:27:53.700997 systemd[1]: sshd@19-172.31.16.209:22-50.85.169.122:40944.service: Deactivated successfully.
Apr 21 10:27:53.703690 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:27:53.704831 systemd-logind[1964]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:27:53.706251 systemd-logind[1964]: Removed session 20.
Apr 21 10:28:04.410327 systemd[1]: run-containerd-runc-k8s.io-cb692d3d4f01375dcb30ffa867cd59f07745455af14609f05bb61567431e8c36-runc.1mU2jm.mount: Deactivated successfully.
Apr 21 10:28:43.288097 systemd[1]: cri-containerd-8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2.scope: Deactivated successfully.
Apr 21 10:28:43.288428 systemd[1]: cri-containerd-8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2.scope: Consumed 3.664s CPU time, 15.5M memory peak, 0B memory swap peak.
Apr 21 10:28:43.467583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2-rootfs.mount: Deactivated successfully.
Apr 21 10:28:43.515768 containerd[1990]: time="2026-04-21T10:28:43.490640554Z" level=info msg="shim disconnected" id=8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2 namespace=k8s.io
Apr 21 10:28:43.522256 containerd[1990]: time="2026-04-21T10:28:43.515753746Z" level=warning msg="cleaning up after shim disconnected" id=8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2 namespace=k8s.io
Apr 21 10:28:43.522256 containerd[1990]: time="2026-04-21T10:28:43.515796072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:28:43.990179 kubelet[3197]: I0421 10:28:43.973670 3197 scope.go:122] "RemoveContainer" containerID="8395d5a2346eab716231891d433dbbba37174061185f89d5930929b3cffac4a2"
Apr 21 10:28:44.052645 systemd[1]: cri-containerd-dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8.scope: Deactivated successfully.
Apr 21 10:28:44.053285 systemd[1]: cri-containerd-dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8.scope: Consumed 11.981s CPU time.
Apr 21 10:28:44.094893 containerd[1990]: time="2026-04-21T10:28:44.090990499Z" level=info msg="shim disconnected" id=dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8 namespace=k8s.io
Apr 21 10:28:44.094893 containerd[1990]: time="2026-04-21T10:28:44.091080957Z" level=warning msg="cleaning up after shim disconnected" id=dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8 namespace=k8s.io
Apr 21 10:28:44.094893 containerd[1990]: time="2026-04-21T10:28:44.091093595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:28:44.095456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8-rootfs.mount: Deactivated successfully.
Apr 21 10:28:44.143145 containerd[1990]: time="2026-04-21T10:28:44.143093106Z" level=info msg="CreateContainer within sandbox \"a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:28:44.226083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011324166.mount: Deactivated successfully.
Apr 21 10:28:44.232275 containerd[1990]: time="2026-04-21T10:28:44.232208268Z" level=info msg="CreateContainer within sandbox \"a67954644fa5324f995a229563f49d143423c680cf1c5768256482d8dd66209c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8bb79dfd812c1b67f3fb6c2b47bcc2dcfd1fbd9be33a2afb89bdae7d84f934af\""
Apr 21 10:28:44.233076 containerd[1990]: time="2026-04-21T10:28:44.232930697Z" level=info msg="StartContainer for \"8bb79dfd812c1b67f3fb6c2b47bcc2dcfd1fbd9be33a2afb89bdae7d84f934af\""
Apr 21 10:28:44.295048 systemd[1]: Started cri-containerd-8bb79dfd812c1b67f3fb6c2b47bcc2dcfd1fbd9be33a2afb89bdae7d84f934af.scope - libcontainer container 8bb79dfd812c1b67f3fb6c2b47bcc2dcfd1fbd9be33a2afb89bdae7d84f934af.
Apr 21 10:28:44.415550 containerd[1990]: time="2026-04-21T10:28:44.415512876Z" level=info msg="StartContainer for \"8bb79dfd812c1b67f3fb6c2b47bcc2dcfd1fbd9be33a2afb89bdae7d84f934af\" returns successfully"
Apr 21 10:28:44.951261 kubelet[3197]: I0421 10:28:44.951233 3197 scope.go:122] "RemoveContainer" containerID="dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8"
Apr 21 10:28:44.974929 containerd[1990]: time="2026-04-21T10:28:44.974611172Z" level=info msg="CreateContainer within sandbox \"4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:28:45.017806 containerd[1990]: time="2026-04-21T10:28:45.014580220Z" level=info msg="CreateContainer within sandbox \"4832c0f8733e2944a8fa249b230cac83868699bf543aac7722a93bf14973f7f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff\""
Apr 21 10:28:45.021102 containerd[1990]: time="2026-04-21T10:28:45.018707638Z" level=info msg="StartContainer for \"2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff\""
Apr 21 10:28:45.027826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871073910.mount: Deactivated successfully.
Apr 21 10:28:45.083026 systemd[1]: Started cri-containerd-2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff.scope - libcontainer container 2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff.
Apr 21 10:28:45.139320 containerd[1990]: time="2026-04-21T10:28:45.139275651Z" level=info msg="StartContainer for \"2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff\" returns successfully"
Apr 21 10:28:48.221798 kubelet[3197]: E0421 10:28:48.221730 3197 controller.go:251] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-209)"
Apr 21 10:28:49.635574 systemd[1]: cri-containerd-ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366.scope: Deactivated successfully.
Apr 21 10:28:49.635931 systemd[1]: cri-containerd-ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366.scope: Consumed 1.852s CPU time, 14.5M memory peak, 0B memory swap peak.
Apr 21 10:28:49.666723 containerd[1990]: time="2026-04-21T10:28:49.666651493Z" level=info msg="shim disconnected" id=ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366 namespace=k8s.io
Apr 21 10:28:49.667344 containerd[1990]: time="2026-04-21T10:28:49.666858147Z" level=warning msg="cleaning up after shim disconnected" id=ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366 namespace=k8s.io
Apr 21 10:28:49.667344 containerd[1990]: time="2026-04-21T10:28:49.666877118Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:28:49.670751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366-rootfs.mount: Deactivated successfully.
Apr 21 10:28:49.986191 kubelet[3197]: I0421 10:28:49.985340 3197 scope.go:122] "RemoveContainer" containerID="ad35e246c2840baecb6523c9a279eeb5f4779f85501d734fd4f5c7ee914c4366"
Apr 21 10:28:49.988335 containerd[1990]: time="2026-04-21T10:28:49.988300545Z" level=info msg="CreateContainer within sandbox \"ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:28:50.019614 containerd[1990]: time="2026-04-21T10:28:50.019570810Z" level=info msg="CreateContainer within sandbox \"ad7ed78c3b89fd660e3ca0901eb5eb660e80bcfa05ce30ba0c4f83b1934c21a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3bc60e42d1ff04c4de51d5727c03124ed3ae53b917acd8255fc74c8806810aff\""
Apr 21 10:28:50.021539 containerd[1990]: time="2026-04-21T10:28:50.020267886Z" level=info msg="StartContainer for \"3bc60e42d1ff04c4de51d5727c03124ed3ae53b917acd8255fc74c8806810aff\""
Apr 21 10:28:50.066026 systemd[1]: Started cri-containerd-3bc60e42d1ff04c4de51d5727c03124ed3ae53b917acd8255fc74c8806810aff.scope - libcontainer container 3bc60e42d1ff04c4de51d5727c03124ed3ae53b917acd8255fc74c8806810aff.
Apr 21 10:28:50.116504 containerd[1990]: time="2026-04-21T10:28:50.116459020Z" level=info msg="StartContainer for \"3bc60e42d1ff04c4de51d5727c03124ed3ae53b917acd8255fc74c8806810aff\" returns successfully"
Apr 21 10:28:56.841140 systemd[1]: cri-containerd-2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff.scope: Deactivated successfully.
Apr 21 10:28:56.866733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff-rootfs.mount: Deactivated successfully.
Apr 21 10:28:56.885681 containerd[1990]: time="2026-04-21T10:28:56.885608718Z" level=info msg="shim disconnected" id=2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff namespace=k8s.io
Apr 21 10:28:56.885681 containerd[1990]: time="2026-04-21T10:28:56.885667175Z" level=warning msg="cleaning up after shim disconnected" id=2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff namespace=k8s.io
Apr 21 10:28:56.885681 containerd[1990]: time="2026-04-21T10:28:56.885680864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:28:56.997627 kubelet[3197]: I0421 10:28:56.995637 3197 scope.go:122] "RemoveContainer" containerID="dd377bea3275700b68513cae23817359618c7128a11d1a1cdbf8c821578c94b8"
Apr 21 10:28:56.997627 kubelet[3197]: I0421 10:28:56.996527 3197 scope.go:122] "RemoveContainer" containerID="2a9500adf8aad26f716d53c977561e1d6bc4f548a5661294f5d643045e93abff"
Apr 21 10:28:57.006611 kubelet[3197]: E0421 10:28:57.006539 3197 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-rqgpz_tigera-operator(1443aaa4-e9de-44db-887b-690ca39cc085)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-rqgpz" podUID="1443aaa4-e9de-44db-887b-690ca39cc085"