Apr 21 10:42:51.939149 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:42:51.939186 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:42:51.939206 kernel: BIOS-provided physical RAM map:
Apr 21 10:42:51.939217 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:42:51.939229 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 21 10:42:51.939239 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 21 10:42:51.939253 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 21 10:42:51.939266 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 21 10:42:51.939277 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 21 10:42:51.939292 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 21 10:42:51.939304 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 21 10:42:51.939316 kernel: NX (Execute Disable) protection: active
Apr 21 10:42:51.939329 kernel: APIC: Static calls initialized
Apr 21 10:42:51.939341 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:42:51.939356 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 21 10:42:51.939372 kernel: SMBIOS 2.7 present.
Apr 21 10:42:51.939385 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 21 10:42:51.939397 kernel: Hypervisor detected: KVM
Apr 21 10:42:51.939409 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:42:51.939420 kernel: kvm-clock: using sched offset of 8422232079 cycles
Apr 21 10:42:51.939431 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:42:51.939443 kernel: tsc: Detected 2499.996 MHz processor
Apr 21 10:42:51.939455 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:42:51.939467 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:42:51.939479 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 21 10:42:51.939495 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:42:51.939506 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:42:51.939517 kernel: Using GB pages for direct mapping
Apr 21 10:42:51.939529 kernel: Secure boot disabled
Apr 21 10:42:51.939541 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:42:51.939553 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 21 10:42:51.939567 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:42:51.939581 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:42:51.939593 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:42:51.939610 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 21 10:42:51.939622 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 21 10:42:51.939634 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:42:51.939648 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:42:51.939661 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 21 10:42:51.939690 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 21 10:42:51.939709 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:42:51.939727 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:42:51.941069 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 21 10:42:51.941087 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 21 10:42:51.941101 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 21 10:42:51.941114 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 21 10:42:51.941128 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 21 10:42:51.941141 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 21 10:42:51.941160 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 21 10:42:51.941174 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 21 10:42:51.941187 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 21 10:42:51.941200 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 21 10:42:51.941213 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 21 10:42:51.941226 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 21 10:42:51.941240 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 10:42:51.941253 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 10:42:51.941267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 21 10:42:51.941284 kernel: NUMA: Initialized distance table, cnt=1
Apr 21 10:42:51.941297 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 21 10:42:51.941310 kernel: Zone ranges:
Apr 21 10:42:51.941324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:42:51.941338 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 21 10:42:51.941351 kernel: Normal empty
Apr 21 10:42:51.941364 kernel: Movable zone start for each node
Apr 21 10:42:51.941378 kernel: Early memory node ranges
Apr 21 10:42:51.941391 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:42:51.941408 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 21 10:42:51.941421 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 21 10:42:51.941434 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 21 10:42:51.941448 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:42:51.941461 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:42:51.941475 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:42:51.941489 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 21 10:42:51.941502 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 21 10:42:51.941515 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:42:51.941529 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 21 10:42:51.941546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:42:51.941560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:42:51.941573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:42:51.941587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:42:51.941600 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:42:51.941613 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:42:51.941627 kernel: TSC deadline timer available
Apr 21 10:42:51.941640 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:42:51.941653 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:42:51.941670 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 21 10:42:51.943212 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:42:51.943229 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:42:51.943245 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:42:51.943260 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:42:51.943275 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:42:51.943289 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:42:51.943304 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:42:51.943319 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:42:51.943342 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:42:51.943358 kernel: random: crng init done
Apr 21 10:42:51.943372 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:42:51.943387 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 10:42:51.943402 kernel: Fallback order for Node 0: 0
Apr 21 10:42:51.943416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 21 10:42:51.943431 kernel: Policy zone: DMA32
Apr 21 10:42:51.943446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:42:51.943465 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 21 10:42:51.943480 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:42:51.943495 kernel: Kernel/User page tables isolation: enabled
Apr 21 10:42:51.943510 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:42:51.943525 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:42:51.943540 kernel: Dynamic Preempt: voluntary
Apr 21 10:42:51.943554 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:42:51.943575 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:42:51.943590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:42:51.943608 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:42:51.943623 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:42:51.943637 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:42:51.943652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:42:51.943667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:42:51.943701 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:42:51.943716 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:42:51.943747 kernel: Console: colour dummy device 80x25
Apr 21 10:42:51.943761 kernel: printk: console [tty0] enabled
Apr 21 10:42:51.943773 kernel: printk: console [ttyS0] enabled
Apr 21 10:42:51.943787 kernel: ACPI: Core revision 20230628
Apr 21 10:42:51.943801 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 21 10:42:51.943820 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:42:51.943835 kernel: x2apic enabled
Apr 21 10:42:51.943849 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:42:51.943864 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:42:51.943879 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 21 10:42:51.943896 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 10:42:51.943911 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 10:42:51.943924 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:42:51.943937 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:42:51.943950 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:42:51.943964 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:42:51.943978 kernel: RETBleed: Vulnerable
Apr 21 10:42:51.943992 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:42:51.944005 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:42:51.944019 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:42:51.944035 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:42:51.944049 kernel: active return thunk: its_return_thunk
Apr 21 10:42:51.944062 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:42:51.944075 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:42:51.944088 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:42:51.944102 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:42:51.944115 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 21 10:42:51.944129 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 21 10:42:51.944143 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:42:51.944157 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:42:51.944170 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:42:51.944187 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:42:51.944201 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:42:51.944214 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 21 10:42:51.944227 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 21 10:42:51.944241 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 21 10:42:51.944254 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 21 10:42:51.944268 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 21 10:42:51.944281 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 21 10:42:51.944295 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 21 10:42:51.944308 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:42:51.944322 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:42:51.944338 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:42:51.944352 kernel: landlock: Up and running.
Apr 21 10:42:51.944366 kernel: SELinux: Initializing.
Apr 21 10:42:51.944380 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:42:51.944393 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:42:51.944407 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 10:42:51.944421 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:42:51.944436 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:42:51.944449 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:42:51.944463 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 10:42:51.944480 kernel: signal: max sigframe size: 3632
Apr 21 10:42:51.944493 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:42:51.944507 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:42:51.944521 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:42:51.944535 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:42:51.944548 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:42:51.944562 kernel: .... node #0, CPUs: #1
Apr 21 10:42:51.944576 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 21 10:42:51.944591 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 10:42:51.944609 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:42:51.944623 kernel: smpboot: Max logical packages: 1
Apr 21 10:42:51.944637 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 21 10:42:51.944651 kernel: devtmpfs: initialized
Apr 21 10:42:51.944665 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:42:51.946734 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 21 10:42:51.946756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:42:51.946771 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:42:51.946783 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:42:51.946802 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:42:51.946816 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:42:51.946831 kernel: audit: type=2000 audit(1776768171.329:1): state=initialized audit_enabled=0 res=1
Apr 21 10:42:51.946845 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:42:51.946858 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:42:51.946871 kernel: cpuidle: using governor menu
Apr 21 10:42:51.946886 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:42:51.946903 kernel: dca service started, version 1.12.1
Apr 21 10:42:51.946917 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:42:51.946935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:42:51.946948 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:42:51.946962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:42:51.946977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:42:51.946991 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:42:51.947006 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:42:51.947021 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:42:51.947037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:42:51.947052 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 21 10:42:51.947071 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:42:51.947086 kernel: ACPI: Interpreter enabled
Apr 21 10:42:51.947102 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:42:51.947118 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:42:51.947134 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:42:51.947149 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:42:51.947165 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 21 10:42:51.947181 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:42:51.947412 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:42:51.947575 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 21 10:42:51.947751 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 21 10:42:51.947774 kernel: acpiphp: Slot [3] registered
Apr 21 10:42:51.947791 kernel: acpiphp: Slot [4] registered
Apr 21 10:42:51.947807 kernel: acpiphp: Slot [5] registered
Apr 21 10:42:51.947824 kernel: acpiphp: Slot [6] registered
Apr 21 10:42:51.947841 kernel: acpiphp: Slot [7] registered
Apr 21 10:42:51.947863 kernel: acpiphp: Slot [8] registered
Apr 21 10:42:51.947879 kernel: acpiphp: Slot [9] registered
Apr 21 10:42:51.947896 kernel: acpiphp: Slot [10] registered
Apr 21 10:42:51.947914 kernel: acpiphp: Slot [11] registered
Apr 21 10:42:51.947930 kernel: acpiphp: Slot [12] registered
Apr 21 10:42:51.947947 kernel: acpiphp: Slot [13] registered
Apr 21 10:42:51.947964 kernel: acpiphp: Slot [14] registered
Apr 21 10:42:51.947980 kernel: acpiphp: Slot [15] registered
Apr 21 10:42:51.947996 kernel: acpiphp: Slot [16] registered
Apr 21 10:42:51.948013 kernel: acpiphp: Slot [17] registered
Apr 21 10:42:51.948034 kernel: acpiphp: Slot [18] registered
Apr 21 10:42:51.948051 kernel: acpiphp: Slot [19] registered
Apr 21 10:42:51.948068 kernel: acpiphp: Slot [20] registered
Apr 21 10:42:51.948085 kernel: acpiphp: Slot [21] registered
Apr 21 10:42:51.948104 kernel: acpiphp: Slot [22] registered
Apr 21 10:42:51.948120 kernel: acpiphp: Slot [23] registered
Apr 21 10:42:51.948137 kernel: acpiphp: Slot [24] registered
Apr 21 10:42:51.948153 kernel: acpiphp: Slot [25] registered
Apr 21 10:42:51.948169 kernel: acpiphp: Slot [26] registered
Apr 21 10:42:51.948190 kernel: acpiphp: Slot [27] registered
Apr 21 10:42:51.948208 kernel: acpiphp: Slot [28] registered
Apr 21 10:42:51.948224 kernel: acpiphp: Slot [29] registered
Apr 21 10:42:51.948241 kernel: acpiphp: Slot [30] registered
Apr 21 10:42:51.948258 kernel: acpiphp: Slot [31] registered
Apr 21 10:42:51.948275 kernel: PCI host bridge to bus 0000:00
Apr 21 10:42:51.948444 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:42:51.948586 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:42:51.951806 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:42:51.951963 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 21 10:42:51.952095 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:42:51.952216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:42:51.952374 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 21 10:42:51.952534 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 21 10:42:51.952710 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 21 10:42:51.952861 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 21 10:42:51.952998 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 21 10:42:51.953139 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 21 10:42:51.953285 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 21 10:42:51.953434 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 21 10:42:51.953579 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 21 10:42:51.954801 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 21 10:42:51.954980 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 21 10:42:51.955139 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 21 10:42:51.955294 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:42:51.955451 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 21 10:42:51.955605 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:42:51.956817 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:42:51.956981 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 21 10:42:51.957135 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:42:51.957279 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 21 10:42:51.957302 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:42:51.957320 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:42:51.957337 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:42:51.957354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:42:51.957371 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 21 10:42:51.957392 kernel: iommu: Default domain type: Translated
Apr 21 10:42:51.957410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:42:51.957427 kernel: efivars: Registered efivars operations
Apr 21 10:42:51.957444 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:42:51.957461 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:42:51.957478 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 21 10:42:51.957494 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 21 10:42:51.957631 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 21 10:42:51.958857 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 21 10:42:51.959040 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:42:51.959066 kernel: vgaarb: loaded
Apr 21 10:42:51.959084 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 21 10:42:51.959102 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 21 10:42:51.959119 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:42:51.959136 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:42:51.959153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:42:51.959170 kernel: pnp: PnP ACPI init
Apr 21 10:42:51.959194 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:42:51.959212 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:42:51.959229 kernel: NET: Registered PF_INET protocol family
Apr 21 10:42:51.959246 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:42:51.959262 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 21 10:42:51.959276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:42:51.959293 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 10:42:51.959309 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 21 10:42:51.959324 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 21 10:42:51.959344 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:42:51.959359 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:42:51.959375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:42:51.959390 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:42:51.959539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:42:51.960708 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:42:51.960897 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:42:51.961045 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 21 10:42:51.961190 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:42:51.961372 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 21 10:42:51.961399 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:42:51.961420 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:42:51.961440 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:42:51.961461 kernel: clocksource: Switched to clocksource tsc
Apr 21 10:42:51.961480 kernel: Initialise system trusted keyrings
Apr 21 10:42:51.961501 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 21 10:42:51.961521 kernel: Key type asymmetric registered
Apr 21 10:42:51.961543 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:42:51.961561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:42:51.961578 kernel: io scheduler mq-deadline registered
Apr 21 10:42:51.961593 kernel: io scheduler kyber registered
Apr 21 10:42:51.961607 kernel: io scheduler bfq registered
Apr 21 10:42:51.961621 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:42:51.961636 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:42:51.961653 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:42:51.961668 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:42:51.965399 kernel: i8042: Warning: Keylock active
Apr 21 10:42:51.965417 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:42:51.965433 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:42:51.965632 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 21 10:42:51.965819 kernel: rtc_cmos 00:00: registered as rtc0
Apr 21 10:42:51.966002 kernel: rtc_cmos 00:00: setting system clock to 2026-04-21T10:42:51 UTC (1776768171)
Apr 21 10:42:51.966174 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 21 10:42:51.966192 kernel: intel_pstate: CPU model not supported
Apr 21 10:42:51.966218 kernel: efifb: probing for efifb
Apr 21 10:42:51.966235 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 21 10:42:51.966251 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 21 10:42:51.966267 kernel: efifb: scrolling: redraw
Apr 21 10:42:51.966285 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:42:51.966302 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:42:51.966319 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:42:51.966335 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:42:51.966353 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:42:51.966376 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:42:51.966393 kernel: Segment Routing with IPv6
Apr 21 10:42:51.966410 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:42:51.966426 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:42:51.966444 kernel: Key type dns_resolver registered
Apr 21 10:42:51.966462 kernel: IPI shorthand broadcast: enabled
Apr 21 10:42:51.966527 kernel: sched_clock: Marking stable (481002981, 127774797)->(677416167, -68638389)
Apr 21 10:42:51.966549 kernel: registered taskstats version 1
Apr 21 10:42:51.966568 kernel: Loading compiled-in X.509 certificates
Apr 21 10:42:51.966590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:42:51.966606 kernel: Key type .fscrypt registered
Apr 21 10:42:51.966623 kernel: Key type fscrypt-provisioning registered
Apr 21 10:42:51.966640 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:42:51.966657 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:42:51.966687 kernel: ima: No architecture policies found
Apr 21 10:42:51.966704 kernel: clk: Disabling unused clocks
Apr 21 10:42:51.966720 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:42:51.966738 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:42:51.966761 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:42:51.966780 kernel: Run /init as init process
Apr 21 10:42:51.966798 kernel: with arguments:
Apr 21 10:42:51.966816 kernel: /init
Apr 21 10:42:51.966834 kernel: with environment:
Apr 21 10:42:51.966851 kernel: HOME=/
Apr 21 10:42:51.966868 kernel: TERM=linux
Apr 21 10:42:51.966885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:42:51.966908 systemd[1]: Detected virtualization amazon.
Apr 21 10:42:51.966926 systemd[1]: Detected architecture x86-64.
Apr 21 10:42:51.966943 systemd[1]: Running in initrd.
Apr 21 10:42:51.966959 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:42:51.966975 systemd[1]: Hostname set to .
Apr 21 10:42:51.966992 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:42:51.967007 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:42:51.967025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:42:51.967044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:42:51.967062 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:42:51.967078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:42:51.967095 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:42:51.967113 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:42:51.967134 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:42:51.967152 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:42:51.967170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:42:51.967188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:42:51.967206 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:42:51.967223 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:42:51.967241 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:42:51.967261 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:42:51.967279 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:42:51.967298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:42:51.967316 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:42:51.967333 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:42:51.967348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:42:51.967363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:42:51.967378 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:42:51.967397 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:42:51.967418 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:42:51.967436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:42:51.967454 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:42:51.967471 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:42:51.967485 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:42:51.967499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:42:51.967549 systemd-journald[179]: Collecting audit messages is disabled.
Apr 21 10:42:51.967588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:42:51.967604 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:42:51.967619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:42:51.967635 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:42:51.967655 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:42:51.967673 systemd-journald[179]: Journal started
Apr 21 10:42:51.967729 systemd-journald[179]: Runtime Journal (/run/log/journal/ec223a508612d4ad4d9956a9ab76c5b2) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:42:51.974711 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:42:51.973847 systemd-modules-load[180]: Inserted module 'overlay'
Apr 21 10:42:51.977434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:42:51.983857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:42:51.992998 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:42:51.996830 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:42:52.004716 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:42:52.012185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:42:52.023762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:42:52.035461 kernel: Bridge firewalling registered
Apr 21 10:42:52.034634 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 21 10:42:52.036640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:42:52.037639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:42:52.046928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:42:52.048711 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:42:52.049594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:42:52.060161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:42:52.062010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:42:52.074670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:42:52.076943 dracut-cmdline[212]: dracut-dracut-053
Apr 21 10:42:52.080900 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:42:52.114947 systemd-resolved[218]: Positive Trust Anchors:
Apr 21 10:42:52.114968 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:42:52.115032 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:42:52.124930 systemd-resolved[218]: Defaulting to hostname 'linux'.
Apr 21 10:42:52.126376 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:42:52.127097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:42:52.171721 kernel: SCSI subsystem initialized
Apr 21 10:42:52.181711 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:42:52.192705 kernel: iscsi: registered transport (tcp)
Apr 21 10:42:52.214730 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:42:52.214817 kernel: QLogic iSCSI HBA Driver
Apr 21 10:42:52.252599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:42:52.260874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:42:52.286311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:42:52.286388 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:42:52.286411 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:42:52.329719 kernel: raid6: avx512x4 gen() 17450 MB/s
Apr 21 10:42:52.347704 kernel: raid6: avx512x2 gen() 18065 MB/s
Apr 21 10:42:52.365704 kernel: raid6: avx512x1 gen() 18147 MB/s
Apr 21 10:42:52.383700 kernel: raid6: avx2x4 gen() 18030 MB/s
Apr 21 10:42:52.401704 kernel: raid6: avx2x2 gen() 17998 MB/s
Apr 21 10:42:52.419904 kernel: raid6: avx2x1 gen() 13655 MB/s
Apr 21 10:42:52.419961 kernel: raid6: using algorithm avx512x1 gen() 18147 MB/s
Apr 21 10:42:52.438896 kernel: raid6: .... xor() 21778 MB/s, rmw enabled
Apr 21 10:42:52.438950 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:42:52.460724 kernel: xor: automatically using best checksumming function avx
Apr 21 10:42:52.621714 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:42:52.631568 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:42:52.636883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:42:52.659811 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 21 10:42:52.665159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:42:52.672911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:42:52.699122 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Apr 21 10:42:52.731994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:42:52.735964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:42:52.791672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:42:52.799906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:42:52.836459 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:42:52.839909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:42:52.842261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:42:52.843867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:42:52.851286 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:42:52.882090 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:42:52.906106 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:42:52.912711 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 21 10:42:52.912987 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 21 10:42:52.914994 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:42:52.915886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:42:52.919156 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:42:52.920301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:42:52.920511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:42:52.921141 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:42:52.932041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:42:52.945022 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 21 10:42:52.945309 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:42:52.946043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:42:52.946894 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:42:52.956690 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:be:69:2a:71:e3
Apr 21 10:42:52.956977 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:42:52.958124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:42:52.962025 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:42:52.979730 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 21 10:42:52.979990 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 21 10:42:52.988560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:42:52.995040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:42:53.003329 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 21 10:42:53.009188 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:42:53.009252 kernel: GPT:9289727 != 33554431
Apr 21 10:42:53.012830 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:42:53.012891 kernel: GPT:9289727 != 33554431
Apr 21 10:42:53.014073 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:42:53.016085 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:42:53.019775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:42:53.255838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 21 10:42:53.268702 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Apr 21 10:42:53.290716 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Apr 21 10:42:53.311578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:42:53.328917 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 21 10:42:53.329661 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 21 10:42:53.335929 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:42:53.353265 disk-uuid[619]: Primary Header is updated.
Apr 21 10:42:53.353265 disk-uuid[619]: Secondary Entries is updated.
Apr 21 10:42:53.353265 disk-uuid[619]: Secondary Header is updated.
Apr 21 10:42:53.429864 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 21 10:42:54.366701 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:42:54.368657 disk-uuid[620]: The operation has completed successfully.
Apr 21 10:42:54.497651 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:42:54.497807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:42:54.527877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:42:54.531408 sh[976]: Success
Apr 21 10:42:54.597911 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 10:42:54.816052 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:42:54.824818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:42:54.828148 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:42:54.860147 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:42:54.860228 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:42:54.860251 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:42:54.863024 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:42:54.865637 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:42:55.282717 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:42:55.427354 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:42:55.428658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:42:55.433907 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:42:55.436863 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:42:55.458702 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:42:55.458775 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:42:55.463020 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:42:55.532241 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:42:55.534425 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:42:55.544019 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:42:55.548267 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:42:55.549485 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:42:55.555178 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:42:55.558870 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:42:55.579722 systemd-networkd[1164]: lo: Link UP
Apr 21 10:42:55.579734 systemd-networkd[1164]: lo: Gained carrier
Apr 21 10:42:55.581444 systemd-networkd[1164]: Enumeration completed
Apr 21 10:42:55.581575 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:42:55.581946 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:42:55.581951 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:42:55.582850 systemd[1]: Reached target network.target - Network.
Apr 21 10:42:55.586284 systemd-networkd[1164]: eth0: Link UP
Apr 21 10:42:55.586289 systemd-networkd[1164]: eth0: Gained carrier
Apr 21 10:42:55.586304 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:42:55.604792 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.20.236/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:42:56.608908 systemd-networkd[1164]: eth0: Gained IPv6LL
Apr 21 10:42:57.238903 ignition[1168]: Ignition 2.19.0
Apr 21 10:42:57.238917 ignition[1168]: Stage: fetch-offline
Apr 21 10:42:57.239200 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:57.239214 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:57.239664 ignition[1168]: Ignition finished successfully
Apr 21 10:42:57.241816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:42:57.244934 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:42:57.262314 ignition[1177]: Ignition 2.19.0
Apr 21 10:42:57.262328 ignition[1177]: Stage: fetch
Apr 21 10:42:57.262979 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:57.262993 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:57.263112 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:57.271213 ignition[1177]: PUT result: OK
Apr 21 10:42:57.273300 ignition[1177]: parsed url from cmdline: ""
Apr 21 10:42:57.273311 ignition[1177]: no config URL provided
Apr 21 10:42:57.273321 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:42:57.273336 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:42:57.273359 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:57.273887 ignition[1177]: PUT result: OK
Apr 21 10:42:57.273930 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 21 10:42:57.274701 ignition[1177]: GET result: OK
Apr 21 10:42:57.274812 ignition[1177]: parsing config with SHA512: 7c4a5c3c90082ebc8d86f7c9073144331599d94795da2242eb0e036799728a389fe7372dc922e4cd0d6f3b072eedbee1397a70966483848079a8223c89126846
Apr 21 10:42:57.282401 unknown[1177]: fetched base config from "system"
Apr 21 10:42:57.282435 unknown[1177]: fetched base config from "system"
Apr 21 10:42:57.283227 ignition[1177]: fetch: fetch complete
Apr 21 10:42:57.282444 unknown[1177]: fetched user config from "aws"
Apr 21 10:42:57.283250 ignition[1177]: fetch: fetch passed
Apr 21 10:42:57.283334 ignition[1177]: Ignition finished successfully
Apr 21 10:42:57.285721 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:42:57.291938 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:42:57.307604 ignition[1184]: Ignition 2.19.0
Apr 21 10:42:57.307618 ignition[1184]: Stage: kargs
Apr 21 10:42:57.308091 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:57.308105 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:57.308221 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:57.311039 ignition[1184]: PUT result: OK
Apr 21 10:42:57.313952 ignition[1184]: kargs: kargs passed
Apr 21 10:42:57.314056 ignition[1184]: Ignition finished successfully
Apr 21 10:42:57.315591 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:42:57.326926 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:42:57.341399 ignition[1191]: Ignition 2.19.0
Apr 21 10:42:57.341413 ignition[1191]: Stage: disks
Apr 21 10:42:57.341913 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:57.341928 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:57.342049 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:57.343171 ignition[1191]: PUT result: OK
Apr 21 10:42:57.345721 ignition[1191]: disks: disks passed
Apr 21 10:42:57.345778 ignition[1191]: Ignition finished successfully
Apr 21 10:42:57.347828 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:42:57.348462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:42:57.348841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:42:57.349369 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:42:57.349930 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:42:57.350616 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:42:57.355873 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:42:57.417566 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:42:57.421515 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:42:57.426839 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:42:57.532710 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:42:57.533344 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:42:57.534640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:42:57.547835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:42:57.550825 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:42:57.553338 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:42:57.553408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:42:57.553446 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:42:57.565436 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:42:57.571963 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1218)
Apr 21 10:42:57.577300 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:42:57.577369 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:42:57.577394 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:42:57.576015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:42:57.635702 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:42:57.637060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:42:58.864079 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:42:58.926395 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:42:58.932188 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:42:58.937703 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:42:59.361497 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:42:59.367845 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:42:59.372969 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:42:59.383247 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:42:59.384698 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:42:59.413927 ignition[1336]: INFO : Ignition 2.19.0
Apr 21 10:42:59.414790 ignition[1336]: INFO : Stage: mount
Apr 21 10:42:59.415423 ignition[1336]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:59.415992 ignition[1336]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:59.415992 ignition[1336]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:59.418908 ignition[1336]: INFO : PUT result: OK
Apr 21 10:42:59.419047 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:42:59.421897 ignition[1336]: INFO : mount: mount passed
Apr 21 10:42:59.423260 ignition[1336]: INFO : Ignition finished successfully
Apr 21 10:42:59.423795 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:42:59.430829 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:42:59.449013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:42:59.466711 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1348)
Apr 21 10:42:59.471134 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:42:59.471207 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:42:59.471230 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:42:59.477702 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:42:59.480271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:42:59.501562 ignition[1365]: INFO : Ignition 2.19.0
Apr 21 10:42:59.501562 ignition[1365]: INFO : Stage: files
Apr 21 10:42:59.503134 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:42:59.503134 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:42:59.503134 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:42:59.504463 ignition[1365]: INFO : PUT result: OK
Apr 21 10:42:59.505907 ignition[1365]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:42:59.506893 ignition[1365]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:42:59.506893 ignition[1365]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:42:59.573320 ignition[1365]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:42:59.574525 ignition[1365]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:42:59.574525 ignition[1365]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:42:59.574363 unknown[1365]: wrote ssh authorized keys file for user: core
Apr 21 10:42:59.585585 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:42:59.585585 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:42:59.670830 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:43:00.109580 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:43:00.109580 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:43:00.112434 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 21 10:43:00.583930 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:43:01.418376 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:43:01.418376 ignition[1365]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:43:01.435021 ignition[1365]: INFO : files: files passed
Apr 21 10:43:01.435021 ignition[1365]: INFO : Ignition finished successfully
Apr 21 10:43:01.436267 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:43:01.454693 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:43:01.482617 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:43:01.516517 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:43:01.542120 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:43:01.648700 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:43:01.648700 initrd-setup-root-after-ignition[1394]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:43:01.672124 initrd-setup-root-after-ignition[1398]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:43:01.677399 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:43:01.686823 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:43:01.703943 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:43:01.904317 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:43:01.904461 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:43:01.907521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:43:01.911951 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:43:01.913877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:43:01.923947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:43:02.035264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:43:02.054954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:43:02.094112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:43:02.095158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:43:02.096534 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:43:02.097550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:43:02.097765 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:43:02.099544 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:43:02.100672 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:43:02.101739 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:43:02.102776 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:43:02.103531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:43:02.104346 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:43:02.105210 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:43:02.106013 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:43:02.107285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:43:02.108054 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:43:02.108779 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:43:02.108967 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:43:02.110154 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:43:02.111267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:43:02.112081 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:43:02.112244 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:43:02.112954 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:43:02.113283 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:43:02.114878 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:43:02.115074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:43:02.115804 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:43:02.115968 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:43:02.126010 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:43:02.127914 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:43:02.128286 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:43:02.139062 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:43:02.140598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:43:02.140885 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:43:02.143283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:43:02.143479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:43:02.151253 ignition[1418]: INFO : Ignition 2.19.0
Apr 21 10:43:02.151253 ignition[1418]: INFO : Stage: umount
Apr 21 10:43:02.155113 ignition[1418]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:43:02.155113 ignition[1418]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:43:02.155113 ignition[1418]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:43:02.155113 ignition[1418]: INFO : PUT result: OK
Apr 21 10:43:02.156998 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:43:02.165125 ignition[1418]: INFO : umount: umount passed
Apr 21 10:43:02.165125 ignition[1418]: INFO : Ignition finished successfully
Apr 21 10:43:02.157163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:43:02.167760 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:43:02.167898 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:43:02.170328 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:43:02.170551 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:43:02.171325 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:43:02.171392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:43:02.172814 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:43:02.172877 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:43:02.174948 systemd[1]: Stopped target network.target - Network.
Apr 21 10:43:02.175404 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:43:02.175480 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:43:02.175993 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:43:02.176432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:43:02.180880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:43:02.182238 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:43:02.182834 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:43:02.183722 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:43:02.183792 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:43:02.184349 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:43:02.184403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:43:02.184998 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:43:02.185068 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:43:02.186032 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:43:02.186100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:43:02.187121 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:43:02.187836 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:43:02.189985 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:43:02.192367 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Apr 21 10:43:02.194318 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:43:02.194571 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:43:02.195886 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:43:02.195937 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:43:02.201840 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:43:02.202554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:43:02.202655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:43:02.206187 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:43:02.211021 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:43:02.211161 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:43:02.220404 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:43:02.220656 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:43:02.224298 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:43:02.224377 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:43:02.225409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:43:02.225459 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:43:02.227557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:43:02.227633 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:43:02.228874 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:43:02.228947 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:43:02.230072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:43:02.230138 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:43:02.238005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:43:02.239700 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:43:02.239787 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:43:02.241127 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:43:02.241206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:43:02.241900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:43:02.241968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:43:02.242624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:43:02.244757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:43:02.245368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:43:02.245431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:43:02.247391 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:43:02.248560 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:43:02.249695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:43:02.249828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:43:02.452154 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:43:02.452301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:43:02.453608 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:43:02.455091 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:43:02.455185 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:43:02.465169 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:43:02.476587 systemd[1]: Switching root.
Apr 21 10:43:02.551493 systemd-journald[179]: Journal stopped
Apr 21 10:43:05.633298 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:43:05.633426 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:43:05.633458 kernel: SELinux: policy capability open_perms=1
Apr 21 10:43:05.633478 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:43:05.633505 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:43:05.633528 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:43:05.633546 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:43:05.633563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:43:05.633585 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:43:05.633605 kernel: audit: type=1403 audit(1776768183.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:43:05.633628 systemd[1]: Successfully loaded SELinux policy in 132.310ms.
Apr 21 10:43:05.633666 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.015ms.
Apr 21 10:43:05.635774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:43:05.635812 systemd[1]: Detected virtualization amazon.
Apr 21 10:43:05.635839 systemd[1]: Detected architecture x86-64.
Apr 21 10:43:05.635864 systemd[1]: Detected first boot.
Apr 21 10:43:05.635900 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:43:05.635926 zram_generator::config[1461]: No configuration found.
Apr 21 10:43:05.635953 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:43:05.635978 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:43:05.636003 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:43:05.636028 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:43:05.636055 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:43:05.636081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:43:05.636107 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:43:05.636137 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:43:05.636164 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:43:05.636190 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:43:05.636215 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:43:05.636240 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:43:05.636265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:43:05.636290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:43:05.636325 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:43:05.636354 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:43:05.636380 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:43:05.636406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:43:05.636431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:43:05.636456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:43:05.636481 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
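The `system-serial\x2dgetty.slice` entries above show systemd's unit-name escaping, under which a `-` inside a name component becomes the literal sequence `\x2d`. A rough shell equivalent of that one rule (the `escape` helper is my own illustration; the real `systemd-escape` tool also handles `/`, leading dots, and non-ASCII bytes):

```shell
# Replace every "-" with the literal escape sequence \x2d,
# as systemd does inside unit-name components
escape() { printf '%s\n' "$1" | sed 's/-/\\x2d/g'; }

escape serial-getty   # prints serial\x2dgetty
```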
Apr 21 10:43:05.636506 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:43:05.636531 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:43:05.636563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:43:05.636589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:43:05.636615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:43:05.636641 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:43:05.636665 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:43:05.636764 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:43:05.636792 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:43:05.636817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:43:05.636841 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:43:05.636871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:43:05.636897 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:43:05.636923 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:43:05.636947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:43:05.636972 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:43:05.636998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:05.637023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:43:05.637050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:43:05.637075 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:43:05.637105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:43:05.637129 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:43:05.637154 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:43:05.637181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:43:05.637207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:43:05.637232 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:43:05.637256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:43:05.637283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:43:05.637313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:43:05.637339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:43:05.637364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:43:05.637390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:43:05.637417 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:43:05.637443 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:43:05.637469 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:43:05.637493 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:43:05.637519 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:43:05.637547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:43:05.637573 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:43:05.637599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:43:05.637621 kernel: loop: module loaded
Apr 21 10:43:05.637648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:43:05.637686 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:43:05.637707 systemd[1]: Stopped verity-setup.service.
Apr 21 10:43:05.637725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:05.637745 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:43:05.637772 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:43:05.637798 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:43:05.637823 kernel: ACPI: bus type drm_connector registered
Apr 21 10:43:05.637849 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:43:05.637876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:43:05.637906 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:43:05.637932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:43:05.637956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:43:05.637982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:43:05.638007 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:43:05.638034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:43:05.638057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:43:05.638084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:43:05.638115 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:43:05.638144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:43:05.638170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:43:05.638195 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:43:05.638221 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:43:05.638249 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:43:05.638280 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:43:05.638349 systemd-journald[1539]: Collecting audit messages is disabled.
Apr 21 10:43:05.638398 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:43:05.638431 kernel: fuse: init (API version 7.39)
Apr 21 10:43:05.638456 systemd-journald[1539]: Journal started
Apr 21 10:43:05.638512 systemd-journald[1539]: Runtime Journal (/run/log/journal/ec223a508612d4ad4d9956a9ab76c5b2) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:43:05.645960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:43:05.133092 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:43:05.240304 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:43:05.240763 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:43:05.654723 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:43:05.654814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:43:05.676130 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:43:05.676227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:43:05.693825 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:43:05.693933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:43:05.703743 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:43:05.721621 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:43:05.714761 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:43:05.717364 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:43:05.717621 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:43:05.720319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:43:05.720537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:43:05.722090 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:43:05.742160 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:43:05.744080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:43:05.745486 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:43:05.756976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:43:05.757734 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:43:05.770661 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:43:05.773705 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:43:05.786973 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:43:05.794822 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:43:05.800502 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:43:05.808970 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:43:05.811418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:43:05.813959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:43:05.823634 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:43:05.825824 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:43:05.843723 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:43:05.853856 udevadm[1597]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:43:05.857418 systemd-journald[1539]: Time spent on flushing to /var/log/journal/ec223a508612d4ad4d9956a9ab76c5b2 is 33.095ms for 988 entries.
Apr 21 10:43:05.857418 systemd-journald[1539]: System Journal (/var/log/journal/ec223a508612d4ad4d9956a9ab76c5b2) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:43:05.898794 systemd-journald[1539]: Received client request to flush runtime journal.
Apr 21 10:43:05.905750 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:43:05.909354 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:43:05.915928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
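As a quick sanity check on the flush statistics above, 33.095 ms spent on 988 entries works out to roughly 33.5 µs per entry:

```shell
# Per-entry flush cost from the journald line above
awk 'BEGIN { printf "%.1f us/entry\n", 33.095 * 1000 / 988 }'
```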
Apr 21 10:43:05.947567 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:43:06.044066 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Apr 21 10:43:06.044096 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Apr 21 10:43:06.052406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:43:06.388726 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:43:06.409714 kernel: loop1: detected capacity change from 0 to 219192
Apr 21 10:43:06.708474 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:43:06.722060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:43:06.750087 systemd-udevd[1614]: Using default interface naming scheme 'v255'.
Apr 21 10:43:06.909707 kernel: loop2: detected capacity change from 0 to 142488
Apr 21 10:43:07.087603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:43:07.100009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:43:07.140950 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:43:07.175844 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:43:07.195844 (udev-worker)[1619]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:43:07.254797 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:43:07.319767 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:43:07.319836 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 21 10:43:07.328606 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:43:07.328744 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 21 10:43:07.333152 kernel: ACPI: button: Sleep Button [SLPF]
Apr 21 10:43:07.378792 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:43:07.393702 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:43:07.413952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:43:07.434190 systemd-networkd[1621]: lo: Link UP
Apr 21 10:43:07.434200 systemd-networkd[1621]: lo: Gained carrier
Apr 21 10:43:07.438388 systemd-networkd[1621]: Enumeration completed
Apr 21 10:43:07.439113 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:43:07.439118 systemd-networkd[1621]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:43:07.440052 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:43:07.445009 systemd-networkd[1621]: eth0: Link UP
Apr 21 10:43:07.445267 systemd-networkd[1621]: eth0: Gained carrier
Apr 21 10:43:07.445301 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:43:07.456653 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:43:07.459100 systemd-networkd[1621]: eth0: DHCPv4 address 172.31.20.236/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:43:07.497723 kernel: loop3: detected capacity change from 0 to 61336
Apr 21 10:43:07.548708 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1617)
Apr 21 10:43:07.674762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:43:07.680612 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:43:07.701909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:43:07.749161 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:43:08.179129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:43:08.187037 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:43:08.193702 kernel: loop4: detected capacity change from 0 to 140768
Apr 21 10:43:08.204929 lvm[1744]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:43:08.230508 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:43:08.231969 kernel: loop5: detected capacity change from 0 to 219192
Apr 21 10:43:08.233662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:43:08.238953 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:43:08.245701 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:43:08.261706 kernel: loop6: detected capacity change from 0 to 142488
Apr 21 10:43:08.268856 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
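The DHCPv4 entry above packs address, prefix length, gateway, and server into a single line. A hedged sed sketch of splitting it apart (the `line` value is copied verbatim from the log; the variable names are my own):

```shell
line='Apr 21 10:43:07.459100 systemd-networkd[1621]: eth0: DHCPv4 address 172.31.20.236/20, gateway 172.31.16.1 acquired from 172.31.16.1'

# Capture the CIDR address (up to the comma) and the gateway
addr=$(printf '%s\n' "$line" | sed -n 's/.*DHCPv4 address \([^,]*\),.*/\1/p')
gw=$(printf '%s\n' "$line" | sed -n 's/.*gateway \([^ ]*\) acquired.*/\1/p')
echo "address=$addr gateway=$gw"   # address=172.31.20.236/20 gateway=172.31.16.1
```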
Apr 21 10:43:08.293965 kernel: loop7: detected capacity change from 0 to 61336
Apr 21 10:43:08.310066 (sd-merge)[1745]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:43:08.311335 (sd-merge)[1745]: Merged extensions into '/usr'.
Apr 21 10:43:08.324248 systemd[1]: Reloading requested from client PID 1570 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:43:08.324266 systemd[1]: Reloading...
Apr 21 10:43:08.391811 zram_generator::config[1773]: No configuration found.
Apr 21 10:43:08.538918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:43:08.614113 systemd[1]: Reloading finished in 289 ms.
Apr 21 10:43:08.640851 systemd-networkd[1621]: eth0: Gained IPv6LL
Apr 21 10:43:08.652357 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:43:08.654743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:43:08.672103 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:43:08.675012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:43:08.708383 systemd[1]: Reloading requested from client PID 1828 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:43:08.708403 systemd[1]: Reloading...
Apr 21 10:43:08.720015 systemd-tmpfiles[1829]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:43:08.721530 systemd-tmpfiles[1829]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:43:08.724080 systemd-tmpfiles[1829]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:43:08.724729 systemd-tmpfiles[1829]: ACLs are not supported, ignoring.
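The docker.socket warning above says line 6 of the unit still listens on the legacy /var/run/ path; systemd rewrites it to /run/ at load time but asks for the unit to be fixed. A sketch of what the corrected stanza would look like (the surrounding unit content is an assumption, only the ListenStream= path is taken from the log):

```ini
# docker.socket -- illustrative fragment; /run is the canonical runtime
# directory, /var/run is a legacy symlink to it.
[Socket]
ListenStream=/run/docker.sock
```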
Apr 21 10:43:08.724827 systemd-tmpfiles[1829]: ACLs are not supported, ignoring.
Apr 21 10:43:08.729495 systemd-tmpfiles[1829]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:43:08.729646 systemd-tmpfiles[1829]: Skipping /boot
Apr 21 10:43:08.741595 systemd-tmpfiles[1829]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:43:08.741843 systemd-tmpfiles[1829]: Skipping /boot
Apr 21 10:43:08.816722 zram_generator::config[1858]: No configuration found.
Apr 21 10:43:08.946202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:43:09.027399 systemd[1]: Reloading finished in 318 ms.
Apr 21 10:43:09.059461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:43:09.071951 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:43:09.077970 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:43:09.091497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:43:09.109146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:43:09.111901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:43:09.125671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.126041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:43:09.132047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:43:09.143608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
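The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare an entry for the same path; systemd-tmpfiles keeps the first and ignores the rest. For reference, tmpfiles.d lines follow a fixed column layout; the entries below are illustrative only, not the actual duplicated lines from provision.conf or systemd.conf:

```ini
# tmpfiles.d format: Type Path Mode User Group Age Argument
# (hypothetical entries -- two fragments each shipping a "d /root" line
# would trigger exactly the warning seen in this log)
d /root             0700 root root -
d /var/log/journal  2755 root systemd-journal -
```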
Apr 21 10:43:09.155949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:43:09.157882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:43:09.158091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.165509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.168261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:43:09.168574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:43:09.168737 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.182701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.183076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:43:09.191476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:43:09.192979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:43:09.193295 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:43:09.217807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:43:09.222357 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:43:09.229457 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:43:09.240852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:43:09.244951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:43:09.246113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:43:09.246314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:43:09.249302 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:43:09.250214 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:43:09.252327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:43:09.252931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:43:09.259575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:43:09.259694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:43:09.261180 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:43:09.261629 systemd-resolved[1921]: Positive Trust Anchors:
Apr 21 10:43:09.261998 systemd-resolved[1921]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:43:09.262059 augenrules[1940]: No rules
Apr 21 10:43:09.262335 systemd-resolved[1921]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:43:09.263554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:43:09.268848 systemd-resolved[1921]: Defaulting to hostname 'linux'.
Apr 21 10:43:09.270769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:43:09.271352 systemd[1]: Reached target network.target - Network.
Apr 21 10:43:09.271923 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:43:09.272335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:43:09.410945 ldconfig[1566]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:43:09.417004 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:43:09.425217 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:43:09.435494 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:43:09.436480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:43:09.440711 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:43:09.441431 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:43:09.442037 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:43:09.442545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:43:09.443130 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:43:09.443629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:43:09.444113 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:43:09.444501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:43:09.444543 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:43:09.444967 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:43:09.446738 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:43:09.451041 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:43:09.460149 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:43:09.461350 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:43:09.461909 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:43:09.462321 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:43:09.462891 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:43:09.462930 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:43:09.464103 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:43:09.468898 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:43:09.474210 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:43:09.477774 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:43:09.481644 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:43:09.482417 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:43:09.486158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:09.494624 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:43:09.497900 systemd[1]: Started ntpd.service - Network Time Service.
Apr 21 10:43:09.515014 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:43:09.523964 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:43:09.528840 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 21 10:43:09.534936 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:43:09.544090 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:43:09.557959 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:43:09.559328 jq[1959]: false
Apr 21 10:43:09.559063 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:43:09.559733 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:43:09.562954 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:43:09.574031 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found loop4
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found loop5
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found loop6
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found loop7
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p1
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p2
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p3
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found usr
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p4
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p6
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p7
Apr 21 10:43:09.578328 extend-filesystems[1960]: Found nvme0n1p9
Apr 21 10:43:09.578328 extend-filesystems[1960]: Checking size of /dev/nvme0n1p9
Apr 21 10:43:09.584234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:43:09.585800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:43:09.641151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:43:09.641413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:43:09.652244 (ntainerd)[1989]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:43:09.669978 extend-filesystems[1960]: Resized partition /dev/nvme0n1p9
Apr 21 10:43:09.673702 jq[1973]: true
Apr 21 10:43:09.687284 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:43:09.700020 update_engine[1972]: I20260421 10:43:09.699150 1972 main.cc:92] Flatcar Update Engine starting
Apr 21 10:43:09.690068 dbus-daemon[1958]: [system] SELinux support is enabled
Apr 21 10:43:09.694930 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:43:09.717788 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:43:09.704472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:43:09.704512 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:43:09.705067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:43:09.705096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:43:09.721177 dbus-daemon[1958]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1621 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:43:09.727015 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:43:09.727532 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:43:09.727757 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:43:09.730178 tar[1980]: linux-amd64/LICENSE
Apr 21 10:43:09.732883 tar[1980]: linux-amd64/helm
Apr 21 10:43:09.732123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:43:09.738312 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:43:09.741267 update_engine[1972]: I20260421 10:43:09.740913 1972 update_check_scheduler.cc:74] Next update check in 8m52s
Apr 21 10:43:09.752024 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:43:09.763528 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:43:09.767993 jq[2006]: true
Apr 21 10:43:09.794921 ntpd[1963]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:43:09.794952 ntpd[1963]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:43:09.794962 ntpd[1963]: ----------------------------------------------------
Apr 21 10:43:09.794973 ntpd[1963]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:43:09.794983 ntpd[1963]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:43:09.794992 ntpd[1963]: corporation. Support and training for ntp-4 are
Apr 21 10:43:09.795002 ntpd[1963]: available at https://www.nwtime.org/support
Apr 21 10:43:09.795011 ntpd[1963]: ----------------------------------------------------
Apr 21 10:43:09.800412 ntpd[1963]: proto: precision = 0.093 usec (-23)
Apr 21 10:43:09.810132 ntpd[1963]: basedate set to 2026-04-09
Apr 21 10:43:09.810160 ntpd[1963]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:43:09.818253 ntpd[1963]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:43:09.819341 ntpd[1963]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:43:09.819535 ntpd[1963]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:43:09.819572 ntpd[1963]: Listen normally on 3 eth0 172.31.20.236:123
Apr 21 10:43:09.819619 ntpd[1963]: Listen normally on 4 lo [::1]:123
Apr 21 10:43:09.819664 ntpd[1963]: Listen normally on 5 eth0 [fe80::4be:69ff:fe2a:71e3%2]:123
Apr 21 10:43:09.821781 ntpd[1963]: Listening on routing socket on fd #22 for interface updates
Apr 21 10:43:09.829421 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:43:09.829458 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:43:09.855425 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:43:09.872072 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 21 10:43:09.876894 systemd-logind[1970]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:43:09.882175 systemd-logind[1970]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 21 10:43:09.882208 systemd-logind[1970]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:43:09.882647 systemd-logind[1970]: New seat seat0.
Apr 21 10:43:09.887205 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:43:09.889699 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:43:09.952300 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:43:09.952300 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:43:09.952300 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 21 10:43:09.949130 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:43:09.966834 coreos-metadata[1957]: Apr 21 10:43:09.963 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:43:09.966834 coreos-metadata[1957]: Apr 21 10:43:09.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:43:09.966834 coreos-metadata[1957]: Apr 21 10:43:09.965 INFO Fetch successful
Apr 21 10:43:09.966834 coreos-metadata[1957]: Apr 21 10:43:09.965 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:43:09.967193 extend-filesystems[1960]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:43:09.949378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
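The online resize above grew the root filesystem from 553472 to 3587067 blocks of 4 KiB each, i.e. from roughly 2.1 GiB to about 13.7 GiB. A quick check of that arithmetic:

```python
# Convert the ext4 block counts from the resize2fs output above into GiB.
BLOCK_SIZE = 4096  # bytes; matches the "(4k) blocks" in the log

def blocks_to_gib(blocks: int) -> float:
    """Size in GiB for a given number of 4 KiB ext4 blocks."""
    return blocks * BLOCK_SIZE / 2**30

old_gib = blocks_to_gib(553472)    # size before the resize
new_gib = blocks_to_gib(3587067)   # size after the resize

print(f"{old_gib:.1f} GiB -> {new_gib:.1f} GiB")  # prints "2.1 GiB -> 13.7 GiB"
```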
Apr 21 10:43:09.971593 coreos-metadata[1957]: Apr 21 10:43:09.967 INFO Fetch successful
Apr 21 10:43:09.971593 coreos-metadata[1957]: Apr 21 10:43:09.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:43:09.971593 coreos-metadata[1957]: Apr 21 10:43:09.971 INFO Fetch successful
Apr 21 10:43:09.971593 coreos-metadata[1957]: Apr 21 10:43:09.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.972 INFO Fetch successful
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.972 INFO Fetch failed with 404: resource not found
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.973 INFO Fetch successful
Apr 21 10:43:09.974301 coreos-metadata[1957]: Apr 21 10:43:09.973 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:43:09.977625 coreos-metadata[1957]: Apr 21 10:43:09.974 INFO Fetch successful
Apr 21 10:43:09.977625 coreos-metadata[1957]: Apr 21 10:43:09.975 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:43:09.977977 coreos-metadata[1957]: Apr 21 10:43:09.977 INFO Fetch successful
Apr 21 10:43:09.977977 coreos-metadata[1957]: Apr 21 10:43:09.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:43:09.979737 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2049)
Apr 21 10:43:09.980208 coreos-metadata[1957]: Apr 21 10:43:09.980 INFO Fetch successful
Apr 21 10:43:09.980208 coreos-metadata[1957]: Apr 21 10:43:09.980 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:43:09.984578 coreos-metadata[1957]: Apr 21 10:43:09.982 INFO Fetch successful
Apr 21 10:43:10.023430 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:43:10.027571 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:43:10.078448 bash[2052]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:43:10.079157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:43:10.094870 systemd[1]: Starting sshkeys.service...
Apr 21 10:43:10.172076 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:43:10.184122 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:43:10.216949 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:43:10.217136 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:43:10.222992 dbus-daemon[1958]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:43:10.235508 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:43:10.325603 amazon-ssm-agent[2031]: Initializing new seelog logger
Apr 21 10:43:10.327088 amazon-ssm-agent[2031]: New Seelog Logger Creation Complete
Apr 21 10:43:10.327088 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.327088 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.330611 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 processing appconfig overrides
Apr 21 10:43:10.330611 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.330611 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.330611 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 processing appconfig overrides
Apr 21 10:43:10.330611 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO Proxy environment variables:
Apr 21 10:43:10.329885 polkitd[2107]: Started polkitd version 121
Apr 21 10:43:10.331150 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.331150 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.331816 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 processing appconfig overrides
Apr 21 10:43:10.336648 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.336648 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:43:10.336804 amazon-ssm-agent[2031]: 2026/04/21 10:43:10 processing appconfig overrides
Apr 21 10:43:10.377045 locksmithd[2013]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:43:10.377865 coreos-metadata[2097]: Apr 21 10:43:10.377 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:43:10.379769 coreos-metadata[2097]: Apr 21 10:43:10.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:43:10.380979 coreos-metadata[2097]: Apr 21 10:43:10.380 INFO Fetch successful
Apr 21 10:43:10.380979 coreos-metadata[2097]: Apr 21 10:43:10.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:43:10.384149 coreos-metadata[2097]: Apr 21 10:43:10.383 INFO Fetch successful
Apr 21 10:43:10.442711 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO http_proxy:
Apr 21 10:43:10.486773 unknown[2097]: wrote ssh authorized keys file for user: core
Apr 21 10:43:10.536350 polkitd[2107]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:43:10.537307 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO no_proxy:
Apr 21 10:43:10.536433 polkitd[2107]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:43:10.541875 polkitd[2107]: Finished loading, compiling and executing 2 rules
Apr 21 10:43:10.544121 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:43:10.545000 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:43:10.548156 polkitd[2107]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:43:10.555024 update-ssh-keys[2158]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:43:10.556529 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:43:10.562640 systemd[1]: Finished sshkeys.service.
Apr 21 10:43:10.635009 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO https_proxy:
Apr 21 10:43:10.635786 systemd-hostnamed[2012]: Hostname set to (transient)
Apr 21 10:43:10.635923 systemd-resolved[1921]: System hostname changed to 'ip-172-31-20-236'.
Apr 21 10:43:10.746090 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO Checking if agent identity type OnPrem can be assumed
Apr 21 10:43:10.773398 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:43:10.833389 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:43:10.838720 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO Checking if agent identity type EC2 can be assumed
Apr 21 10:43:10.848005 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:43:10.871940 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:43:10.872188 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:43:10.884132 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:43:10.912032 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:43:10.923959 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:43:10.937662 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:43:10.938856 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:43:10.941892 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO Agent will take identity from EC2
Apr 21 10:43:11.041057 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:43:11.070704 containerd[1989]: time="2026-04-21T10:43:11.067576945Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:43:11.139984 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:43:11.146877 containerd[1989]: time="2026-04-21T10:43:11.146815890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.149656 containerd[1989]: time="2026-04-21T10:43:11.149600748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:43:11.149822 containerd[1989]: time="2026-04-21T10:43:11.149802753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:43:11.149926 containerd[1989]: time="2026-04-21T10:43:11.149909892Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:43:11.150186 containerd[1989]: time="2026-04-21T10:43:11.150166436Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:43:11.150276 containerd[1989]: time="2026-04-21T10:43:11.150260599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.150428 containerd[1989]: time="2026-04-21T10:43:11.150405340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:43:11.150504 containerd[1989]: time="2026-04-21T10:43:11.150489119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.150862 containerd[1989]: time="2026-04-21T10:43:11.150834498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:43:11.150972 containerd[1989]: time="2026-04-21T10:43:11.150954311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151056 containerd[1989]: time="2026-04-21T10:43:11.151038987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151123 containerd[1989]: time="2026-04-21T10:43:11.151109508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151294 containerd[1989]: time="2026-04-21T10:43:11.151277569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151630 containerd[1989]: time="2026-04-21T10:43:11.151609340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151919 containerd[1989]: time="2026-04-21T10:43:11.151895538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:43:11.151993 containerd[1989]: time="2026-04-21T10:43:11.151979559Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:43:11.152163 containerd[1989]: time="2026-04-21T10:43:11.152146480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:43:11.152289 containerd[1989]: time="2026-04-21T10:43:11.152273925Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:43:11.158855 containerd[1989]: time="2026-04-21T10:43:11.158778289Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:43:11.158985 containerd[1989]: time="2026-04-21T10:43:11.158864109Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:43:11.158985 containerd[1989]: time="2026-04-21T10:43:11.158889466Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:43:11.158985 containerd[1989]: time="2026-04-21T10:43:11.158911908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:43:11.158985 containerd[1989]: time="2026-04-21T10:43:11.158933333Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:43:11.159160 containerd[1989]: time="2026-04-21T10:43:11.159108630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159457403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159615621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159638176Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159665025Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159704953Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159727060Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159746480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159767021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159788761Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159808681Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159828253Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159846265Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159874570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.160724 containerd[1989]: time="2026-04-21T10:43:11.159899954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.159945805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.159967382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.159992652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160012165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160029451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160049627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160068563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160090804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160111056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160130304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160148021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160169086Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160207198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160227978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161313 containerd[1989]: time="2026-04-21T10:43:11.160245622Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160300165Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160325106Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160341508Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160359725Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160376421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160395635Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160410459Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:43:11.161897 containerd[1989]: time="2026-04-21T10:43:11.160426138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.160989317Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.161083836Z" level=info msg="Connect containerd service"
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.161133064Z" level=info msg="using legacy CRI server"
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.161144466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.161288441Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:43:11.162201 containerd[1989]: time="2026-04-21T10:43:11.162157711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:43:11.165379 containerd[1989]: time="2026-04-21T10:43:11.162874649Z" level=info msg="Start subscribing containerd event"
Apr 21 10:43:11.165379 containerd[1989]: time="2026-04-21T10:43:11.163573371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:43:11.165379 containerd[1989]: time="2026-04-21T10:43:11.163652044Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:43:11.165379 containerd[1989]: time="2026-04-21T10:43:11.163705798Z" level=info msg="Start recovering state"
Apr 21 10:43:11.166105 containerd[1989]: time="2026-04-21T10:43:11.165972322Z" level=info msg="Start event monitor"
Apr 21 10:43:11.166105 containerd[1989]: time="2026-04-21T10:43:11.165999106Z" level=info msg="Start snapshots syncer"
Apr 21 10:43:11.166105 containerd[1989]: time="2026-04-21T10:43:11.166015880Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:43:11.166105 containerd[1989]: time="2026-04-21T10:43:11.166034102Z" level=info msg="Start streaming server"
Apr 21 10:43:11.169045 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:43:11.174037 containerd[1989]: time="2026-04-21T10:43:11.173994129Z" level=info msg="containerd successfully booted in 0.107752s"
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] Starting Core Agent
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [Registrar] Starting registrar module
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:11 INFO [EC2Identity] EC2 registration was successful.
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:11 INFO [CredentialRefresher] credentialRefresher has started
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:11 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 21 10:43:11.198262 amazon-ssm-agent[2031]: 2026-04-21 10:43:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 21 10:43:11.239408 amazon-ssm-agent[2031]: 2026-04-21 10:43:11 INFO [CredentialRefresher] Next credential rotation will be in 30.249993634866666 minutes
Apr 21 10:43:11.278856 tar[1980]: linux-amd64/README.md
Apr 21 10:43:11.290489 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:43:12.211373 amazon-ssm-agent[2031]: 2026-04-21 10:43:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 21 10:43:12.312661 amazon-ssm-agent[2031]: 2026-04-21 10:43:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2214) started
Apr 21 10:43:12.413160 amazon-ssm-agent[2031]: 2026-04-21 10:43:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 21 10:43:16.553129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:16.553999 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:43:16.555007 systemd[1]: Startup finished in 611ms (kernel) + 11.866s (initrd) + 13.031s (userspace) = 25.509s.
Apr 21 10:43:16.559368 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:43:17.226888 systemd-resolved[1921]: Clock change detected. Flushing caches.
Apr 21 10:43:19.355312 kubelet[2234]: E0421 10:43:19.355251    2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:43:19.358126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:43:19.358348 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:43:19.358899 systemd[1]: kubelet.service: Consumed 1.025s CPU time.
Apr 21 10:43:20.219642 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:43:20.231946 systemd[1]: Started sshd@0-172.31.20.236:22-50.85.169.122:57586.service - OpenSSH per-connection server daemon (50.85.169.122:57586).
Apr 21 10:43:21.268320 sshd[2242]: Accepted publickey for core from 50.85.169.122 port 57586 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:21.271098 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:21.280706 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:43:21.285829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:43:21.289011 systemd-logind[1970]: New session 1 of user core.
Apr 21 10:43:21.303837 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:43:21.310849 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:43:21.320423 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:43:21.437353 systemd[2246]: Queued start job for default target default.target.
Apr 21 10:43:21.447773 systemd[2246]: Created slice app.slice - User Application Slice.
Apr 21 10:43:21.447819 systemd[2246]: Reached target paths.target - Paths.
Apr 21 10:43:21.447840 systemd[2246]: Reached target timers.target - Timers.
Apr 21 10:43:21.449556 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:43:21.463194 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:43:21.463362 systemd[2246]: Reached target sockets.target - Sockets.
Apr 21 10:43:21.463384 systemd[2246]: Reached target basic.target - Basic System.
Apr 21 10:43:21.463525 systemd[2246]: Reached target default.target - Main User Target.
Apr 21 10:43:21.463573 systemd[2246]: Startup finished in 136ms.
Apr 21 10:43:21.463709 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:43:21.479835 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:43:22.205836 systemd[1]: Started sshd@1-172.31.20.236:22-50.85.169.122:57602.service - OpenSSH per-connection server daemon (50.85.169.122:57602).
Apr 21 10:43:23.225469 sshd[2257]: Accepted publickey for core from 50.85.169.122 port 57602 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:23.227232 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:23.232764 systemd-logind[1970]: New session 2 of user core.
Apr 21 10:43:23.241719 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:43:23.934815 sshd[2257]: pam_unix(sshd:session): session closed for user core
Apr 21 10:43:23.939387 systemd-logind[1970]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:43:23.939969 systemd[1]: sshd@1-172.31.20.236:22-50.85.169.122:57602.service: Deactivated successfully.
Apr 21 10:43:23.942100 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:43:23.943165 systemd-logind[1970]: Removed session 2.
Apr 21 10:43:24.113837 systemd[1]: Started sshd@2-172.31.20.236:22-50.85.169.122:57606.service - OpenSSH per-connection server daemon (50.85.169.122:57606).
Apr 21 10:43:25.130739 sshd[2264]: Accepted publickey for core from 50.85.169.122 port 57606 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:25.131404 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:25.136764 systemd-logind[1970]: New session 3 of user core.
Apr 21 10:43:25.146689 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:43:25.833608 sshd[2264]: pam_unix(sshd:session): session closed for user core
Apr 21 10:43:25.836829 systemd[1]: sshd@2-172.31.20.236:22-50.85.169.122:57606.service: Deactivated successfully.
Apr 21 10:43:25.839035 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:43:25.840535 systemd-logind[1970]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:43:25.842246 systemd-logind[1970]: Removed session 3.
Apr 21 10:43:26.002908 systemd[1]: Started sshd@3-172.31.20.236:22-50.85.169.122:57616.service - OpenSSH per-connection server daemon (50.85.169.122:57616).
Apr 21 10:43:26.987992 sshd[2271]: Accepted publickey for core from 50.85.169.122 port 57616 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:26.989814 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:26.995053 systemd-logind[1970]: New session 4 of user core.
Apr 21 10:43:27.004695 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:43:27.674591 sshd[2271]: pam_unix(sshd:session): session closed for user core
Apr 21 10:43:27.677885 systemd[1]: sshd@3-172.31.20.236:22-50.85.169.122:57616.service: Deactivated successfully.
Apr 21 10:43:27.679914 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:43:27.681713 systemd-logind[1970]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:43:27.683020 systemd-logind[1970]: Removed session 4.
Apr 21 10:43:27.845824 systemd[1]: Started sshd@4-172.31.20.236:22-50.85.169.122:57620.service - OpenSSH per-connection server daemon (50.85.169.122:57620).
Apr 21 10:43:28.819522 sshd[2278]: Accepted publickey for core from 50.85.169.122 port 57620 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:28.821111 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:28.826716 systemd-logind[1970]: New session 5 of user core.
Apr 21 10:43:28.831697 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:43:29.375486 sudo[2281]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:43:29.375901 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:43:29.376915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:43:29.387201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:29.392717 sudo[2281]: pam_unix(sudo:session): session closed for user root
Apr 21 10:43:29.554893 sshd[2278]: pam_unix(sshd:session): session closed for user core
Apr 21 10:43:29.561797 systemd-logind[1970]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:43:29.562783 systemd[1]: sshd@4-172.31.20.236:22-50.85.169.122:57620.service: Deactivated successfully.
Apr 21 10:43:29.565074 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:43:29.568149 systemd-logind[1970]: Removed session 5.
Apr 21 10:43:29.617983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:29.623876 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:43:29.666253 kubelet[2293]: E0421 10:43:29.666069    2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:43:29.670477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:43:29.670693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:43:29.736797 systemd[1]: Started sshd@5-172.31.20.236:22-50.85.169.122:56708.service - OpenSSH per-connection server daemon (50.85.169.122:56708).
Apr 21 10:43:30.728801 sshd[2301]: Accepted publickey for core from 50.85.169.122 port 56708 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:30.729662 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:30.735247 systemd-logind[1970]: New session 6 of user core.
Apr 21 10:43:30.740740 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:43:31.258519 sudo[2305]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:43:31.258922 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:43:31.262831 sudo[2305]: pam_unix(sudo:session): session closed for user root
Apr 21 10:43:31.268349 sudo[2304]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:43:31.268757 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:43:31.283819 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:43:31.285930 auditctl[2308]: No rules
Apr 21 10:43:31.287212 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:43:31.287494 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:43:31.289703 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:43:31.327018 augenrules[2326]: No rules
Apr 21 10:43:31.328725 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:43:31.331233 sudo[2304]: pam_unix(sudo:session): session closed for user root
Apr 21 10:43:31.493483 sshd[2301]: pam_unix(sshd:session): session closed for user core
Apr 21 10:43:31.496805 systemd[1]: sshd@5-172.31.20.236:22-50.85.169.122:56708.service: Deactivated successfully.
Apr 21 10:43:31.499020 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:43:31.500570 systemd-logind[1970]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:43:31.502104 systemd-logind[1970]: Removed session 6.
Apr 21 10:43:31.677294 systemd[1]: Started sshd@6-172.31.20.236:22-50.85.169.122:56718.service - OpenSSH per-connection server daemon (50.85.169.122:56718).
Apr 21 10:43:32.692328 sshd[2334]: Accepted publickey for core from 50.85.169.122 port 56718 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE
Apr 21 10:43:32.693082 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:43:32.698859 systemd-logind[1970]: New session 7 of user core.
Apr 21 10:43:32.707725 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:43:33.233374 sudo[2337]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:43:33.233789 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:43:36.000859 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:43:36.003490 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:43:37.678099 dockerd[2352]: time="2026-04-21T10:43:37.678035150Z" level=info msg="Starting up"
Apr 21 10:43:37.922334 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport666016404-merged.mount: Deactivated successfully.
Apr 21 10:43:37.953399 dockerd[2352]: time="2026-04-21T10:43:37.952758461Z" level=info msg="Loading containers: start."
Apr 21 10:43:38.154474 kernel: Initializing XFRM netlink socket
Apr 21 10:43:38.206935 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:43:38.268814 systemd-networkd[1621]: docker0: Link UP
Apr 21 10:43:38.289569 dockerd[2352]: time="2026-04-21T10:43:38.289520186Z" level=info msg="Loading containers: done."
Apr 21 10:43:38.320090 dockerd[2352]: time="2026-04-21T10:43:38.320021237Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:43:38.320274 dockerd[2352]: time="2026-04-21T10:43:38.320150958Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:43:38.320327 dockerd[2352]: time="2026-04-21T10:43:38.320287659Z" level=info msg="Daemon has completed initialization"
Apr 21 10:43:38.354239 dockerd[2352]: time="2026-04-21T10:43:38.353711335Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:43:38.353940 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:43:39.776890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:43:39.785883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:40.092776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:40.098520 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:43:40.146985 kubelet[2499]: E0421 10:43:40.146910    2499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:43:40.149959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:43:40.150182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:43:40.527173 containerd[1989]: time="2026-04-21T10:43:40.526787774Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 21 10:43:41.085378 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 21 10:43:41.179307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108185361.mount: Deactivated successfully.
Apr 21 10:43:42.621812 containerd[1989]: time="2026-04-21T10:43:42.621756482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:42.623175 containerd[1989]: time="2026-04-21T10:43:42.623124966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100514"
Apr 21 10:43:42.624192 containerd[1989]: time="2026-04-21T10:43:42.624125946Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:42.631462 containerd[1989]: time="2026-04-21T10:43:42.629968192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:42.634033 containerd[1989]: time="2026-04-21T10:43:42.633984282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.107147182s"
Apr 21 10:43:42.634033 containerd[1989]: time="2026-04-21T10:43:42.634034335Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 21 10:43:42.634644 containerd[1989]: time="2026-04-21T10:43:42.634614554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 21 10:43:43.930617 containerd[1989]: time="2026-04-21T10:43:43.930564645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:43.936485 containerd[1989]: time="2026-04-21T10:43:43.936383552Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252738"
Apr 21 10:43:43.945671 containerd[1989]: time="2026-04-21T10:43:43.945620454Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:43.954617 containerd[1989]: time="2026-04-21T10:43:43.954530007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:43.956288 containerd[1989]: time="2026-04-21T10:43:43.955878442Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.321227428s"
Apr 21 10:43:43.956288 containerd[1989]: time="2026-04-21T10:43:43.955928420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 21 10:43:43.956702 containerd[1989]: time="2026-04-21T10:43:43.956678197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 21 10:43:45.021801 containerd[1989]: time="2026-04-21T10:43:45.021734652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:45.023724 containerd[1989]: time="2026-04-21T10:43:45.023670840Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810891"
Apr 21 10:43:45.028467 containerd[1989]: time="2026-04-21T10:43:45.026568179Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:45.031192 containerd[1989]: time="2026-04-21T10:43:45.031149233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:45.032293 containerd[1989]: time="2026-04-21T10:43:45.032250703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.075450298s"
Apr 21 10:43:45.032473 containerd[1989]: time="2026-04-21T10:43:45.032430017Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 21 10:43:45.033385 containerd[1989]: time="2026-04-21T10:43:45.033336939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 21 10:43:46.081071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044646792.mount: Deactivated successfully.
Apr 21 10:43:46.482482 containerd[1989]: time="2026-04-21T10:43:46.482207191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:46.484041 containerd[1989]: time="2026-04-21T10:43:46.483824080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972954"
Apr 21 10:43:46.486828 containerd[1989]: time="2026-04-21T10:43:46.486736181Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:46.489212 containerd[1989]: time="2026-04-21T10:43:46.489161817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:46.490112 containerd[1989]: time="2026-04-21T10:43:46.490051600Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.456462907s"
Apr 21 10:43:46.490112 containerd[1989]: time="2026-04-21T10:43:46.490114625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 21 10:43:46.491003 containerd[1989]: time="2026-04-21T10:43:46.490833100Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 21 10:43:47.007007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127708355.mount: Deactivated successfully.
Apr 21 10:43:48.135570 containerd[1989]: time="2026-04-21T10:43:48.135513899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.137466 containerd[1989]: time="2026-04-21T10:43:48.137368276Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Apr 21 10:43:48.138791 containerd[1989]: time="2026-04-21T10:43:48.138729686Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.143544 containerd[1989]: time="2026-04-21T10:43:48.143471206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.146137 containerd[1989]: time="2026-04-21T10:43:48.144824977Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.653931412s"
Apr 21 10:43:48.146137 containerd[1989]: time="2026-04-21T10:43:48.145062410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 21 10:43:48.146137 containerd[1989]: time="2026-04-21T10:43:48.145707609Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:43:48.692468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800102673.mount: Deactivated successfully.
Apr 21 10:43:48.705609 containerd[1989]: time="2026-04-21T10:43:48.705534646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.707538 containerd[1989]: time="2026-04-21T10:43:48.707466882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 21 10:43:48.710474 containerd[1989]: time="2026-04-21T10:43:48.710173254Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.719012 containerd[1989]: time="2026-04-21T10:43:48.718842596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:48.720339 containerd[1989]: time="2026-04-21T10:43:48.719720644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 573.964407ms"
Apr 21 10:43:48.720339 containerd[1989]: time="2026-04-21T10:43:48.719764745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:43:48.720339 containerd[1989]: time="2026-04-21T10:43:48.720275207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 21 10:43:49.298979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592415515.mount: Deactivated successfully.
Apr 21 10:43:50.277090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 10:43:50.285766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:50.645041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:50.657098 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:43:50.694885 containerd[1989]: time="2026-04-21T10:43:50.694821789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:50.701985 containerd[1989]: time="2026-04-21T10:43:50.701845411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874817"
Apr 21 10:43:50.711894 containerd[1989]: time="2026-04-21T10:43:50.709833395Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:50.719152 containerd[1989]: time="2026-04-21T10:43:50.719103300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:43:50.721535 containerd[1989]: time="2026-04-21T10:43:50.721484623Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.001177877s"
Apr 21 10:43:50.722622 containerd[1989]: time="2026-04-21T10:43:50.722483559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 21 10:43:50.739688 kubelet[2711]: E0421 10:43:50.739644 2711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:43:50.743097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:43:50.743308 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:43:54.143905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:54.149805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:54.196983 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-7.scope)...
Apr 21 10:43:54.197004 systemd[1]: Reloading...
Apr 21 10:43:54.333500 zram_generator::config[2783]: No configuration found.
Apr 21 10:43:54.490640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:43:54.576738 systemd[1]: Reloading finished in 379 ms.
Apr 21 10:43:54.628630 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:43:54.628746 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:43:54.629047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:54.635843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:43:55.366629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:43:55.371067 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:43:55.432621 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:43:55.432621 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:43:55.434524 kubelet[2847]: I0421 10:43:55.434472 2847 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:43:55.486479 update_engine[1972]: I20260421 10:43:55.486384 1972 update_attempter.cc:509] Updating boot flags...
Apr 21 10:43:55.578929 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2867)
Apr 21 10:43:55.811473 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2871)
Apr 21 10:43:56.046471 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2871)
Apr 21 10:43:56.388231 kubelet[2847]: I0421 10:43:56.388182 2847 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 21 10:43:56.388231 kubelet[2847]: I0421 10:43:56.388217 2847 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:43:56.389647 kubelet[2847]: I0421 10:43:56.389613 2847 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:43:56.389647 kubelet[2847]: I0421 10:43:56.389650 2847 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:43:56.390471 kubelet[2847]: I0421 10:43:56.390431 2847 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:43:56.409824 kubelet[2847]: I0421 10:43:56.409378 2847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:43:56.414143 kubelet[2847]: E0421 10:43:56.414096 2847 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:43:56.417762 kubelet[2847]: E0421 10:43:56.417723 2847 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:43:56.419458 kubelet[2847]: I0421 10:43:56.417971 2847 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:43:56.420472 kubelet[2847]: I0421 10:43:56.420425 2847 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:43:56.428257 kubelet[2847]: I0421 10:43:56.428182 2847 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:43:56.428495 kubelet[2847]: I0421 10:43:56.428253 2847 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:43:56.428645 kubelet[2847]: I0421 10:43:56.428497 2847 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:43:56.428645 kubelet[2847]: I0421 10:43:56.428515 2847 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:43:56.428726 kubelet[2847]: I0421 10:43:56.428654 2847 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:43:56.430760 kubelet[2847]: I0421 10:43:56.430734 2847 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:43:56.430973 kubelet[2847]: I0421 10:43:56.430955 2847 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:43:56.431047 kubelet[2847]: I0421 10:43:56.430982 2847 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:43:56.431047 kubelet[2847]: I0421 10:43:56.431013 2847 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:43:56.431047 kubelet[2847]: I0421 10:43:56.431032 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:43:56.435456 kubelet[2847]: E0421 10:43:56.434563 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:43:56.435456 kubelet[2847]: E0421 10:43:56.434733 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-236&limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:43:56.435456 kubelet[2847]: I0421 10:43:56.434867 2847 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:43:56.435941 kubelet[2847]: I0421 10:43:56.435532 2847 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:43:56.435941 kubelet[2847]: I0421 10:43:56.435575 2847 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:43:56.435941 kubelet[2847]: W0421 10:43:56.435633 2847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:43:56.439643 kubelet[2847]: I0421 10:43:56.439615 2847 server.go:1262] "Started kubelet"
Apr 21 10:43:56.444467 kubelet[2847]: I0421 10:43:56.443575 2847 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:43:56.444746 kubelet[2847]: I0421 10:43:56.444725 2847 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:43:56.444984 kubelet[2847]: I0421 10:43:56.444956 2847 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:43:56.445160 kubelet[2847]: I0421 10:43:56.445145 2847 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:43:56.450803 kubelet[2847]: I0421 10:43:56.450777 2847 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:43:56.451671 kubelet[2847]: I0421 10:43:56.451648 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:43:56.454528 kubelet[2847]: E0421 10:43:56.451659 2847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.236:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-236.18a8594c1d2dd6e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-236,UID:ip-172-31-20-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-236,},FirstTimestamp:2026-04-21 10:43:56.439590624 +0000 UTC m=+1.063919433,LastTimestamp:2026-04-21 10:43:56.439590624 +0000 UTC m=+1.063919433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-236,}"
Apr 21 10:43:56.457086 kubelet[2847]: I0421 10:43:56.457052 2847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:43:56.462141 kubelet[2847]: I0421 10:43:56.462099 2847 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:43:56.465342 kubelet[2847]: I0421 10:43:56.464548 2847 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:43:56.465342 kubelet[2847]: E0421 10:43:56.464825 2847 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-236\" not found"
Apr 21 10:43:56.465342 kubelet[2847]: I0421 10:43:56.464889 2847 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:43:56.465342 kubelet[2847]: I0421 10:43:56.464947 2847 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:43:56.465851 kubelet[2847]: E0421 10:43:56.465821 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:43:56.465960 kubelet[2847]: E0421 10:43:56.465922 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-236?timeout=10s\": dial tcp 172.31.20.236:6443: connect: connection refused" interval="200ms"
Apr 21 10:43:56.470206 kubelet[2847]: E0421 10:43:56.470176 2847 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:43:56.475974 kubelet[2847]: I0421 10:43:56.475946 2847 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:43:56.477155 kubelet[2847]: I0421 10:43:56.476130 2847 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:43:56.477155 kubelet[2847]: I0421 10:43:56.476262 2847 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:43:56.487481 kubelet[2847]: I0421 10:43:56.487425 2847 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:43:56.487481 kubelet[2847]: I0421 10:43:56.487473 2847 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 21 10:43:56.487652 kubelet[2847]: I0421 10:43:56.487503 2847 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 21 10:43:56.487652 kubelet[2847]: E0421 10:43:56.487554 2847 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:43:56.491745 kubelet[2847]: E0421 10:43:56.491603 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:43:56.504902 kubelet[2847]: I0421 10:43:56.504634 2847 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:43:56.504902 kubelet[2847]: I0421 10:43:56.504654 2847 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:43:56.504902 kubelet[2847]: I0421 10:43:56.504671 2847 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:43:56.506706 kubelet[2847]: I0421 10:43:56.506678 2847 policy_none.go:49] "None policy: Start"
Apr 21 10:43:56.506706 kubelet[2847]: I0421 10:43:56.506701 2847 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:43:56.506862 kubelet[2847]: I0421 10:43:56.506716 2847 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:43:56.508049 kubelet[2847]: I0421 10:43:56.508022 2847 policy_none.go:47] "Start"
Apr 21 10:43:56.513929 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:43:56.524776 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:43:56.538651 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:43:56.540424 kubelet[2847]: E0421 10:43:56.540396 2847 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:43:56.542023 kubelet[2847]: I0421 10:43:56.540655 2847 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:43:56.542023 kubelet[2847]: I0421 10:43:56.540671 2847 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:43:56.542023 kubelet[2847]: I0421 10:43:56.541110 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:43:56.543182 kubelet[2847]: E0421 10:43:56.542603 2847 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:43:56.543182 kubelet[2847]: E0421 10:43:56.542651 2847 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-236\" not found"
Apr 21 10:43:56.609627 systemd[1]: Created slice kubepods-burstable-pod08e1219b9c9a1dff802e11fa62bb2e41.slice - libcontainer container kubepods-burstable-pod08e1219b9c9a1dff802e11fa62bb2e41.slice.
Apr 21 10:43:56.617673 kubelet[2847]: E0421 10:43:56.617407 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236"
Apr 21 10:43:56.623813 systemd[1]: Created slice kubepods-burstable-pod75baf26207da95cee7f28a20e2a4e4f1.slice - libcontainer container kubepods-burstable-pod75baf26207da95cee7f28a20e2a4e4f1.slice.
Apr 21 10:43:56.626682 kubelet[2847]: E0421 10:43:56.626626 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236"
Apr 21 10:43:56.628629 systemd[1]: Created slice kubepods-burstable-pod5059489b86098a777b871054565efe44.slice - libcontainer container kubepods-burstable-pod5059489b86098a777b871054565efe44.slice.
Apr 21 10:43:56.630729 kubelet[2847]: E0421 10:43:56.630701 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236"
Apr 21 10:43:56.643653 kubelet[2847]: I0421 10:43:56.643206 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236"
Apr 21 10:43:56.643653 kubelet[2847]: E0421 10:43:56.643603 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.236:6443/api/v1/nodes\": dial tcp 172.31.20.236:6443: connect: connection refused" node="ip-172-31-20-236"
Apr 21 10:43:56.666484 kubelet[2847]: E0421 10:43:56.666414 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-236?timeout=10s\": dial tcp 172.31.20.236:6443: connect: connection refused" interval="400ms"
Apr 21 10:43:56.767104 kubelet[2847]: I0421 10:43:56.766899 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:43:56.767104 kubelet[2847]: I0421 10:43:56.766948 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-ca-certs\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:43:56.767104 kubelet[2847]: I0421 10:43:56.766978 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:43:56.767104 kubelet[2847]: I0421 10:43:56.767003 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:43:56.767104 kubelet[2847]: I0421 10:43:56.767045 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:43:56.767380 kubelet[2847]: I0421 10:43:56.767075 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5059489b86098a777b871054565efe44-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-236\" (UID: \"5059489b86098a777b871054565efe44\") " pod="kube-system/kube-scheduler-ip-172-31-20-236"
Apr 21 10:43:56.767380 kubelet[2847]: I0421 10:43:56.767097 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:43:56.767380 kubelet[2847]: I0421 10:43:56.767117 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:43:56.767380 kubelet[2847]: I0421 10:43:56.767138 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:43:56.846167 kubelet[2847]: I0421 10:43:56.846132 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236"
Apr 21 10:43:56.846535 kubelet[2847]: E0421 10:43:56.846500 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.236:6443/api/v1/nodes\": dial tcp 172.31.20.236:6443: connect: connection refused" node="ip-172-31-20-236"
Apr 21 10:43:56.922219 containerd[1989]: time="2026-04-21T10:43:56.922087288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-236,Uid:08e1219b9c9a1dff802e11fa62bb2e41,Namespace:kube-system,Attempt:0,}"
Apr 21 10:43:56.929246 containerd[1989]: time="2026-04-21T10:43:56.929189949Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-236,Uid:75baf26207da95cee7f28a20e2a4e4f1,Namespace:kube-system,Attempt:0,}" Apr 21 10:43:56.932958 containerd[1989]: time="2026-04-21T10:43:56.932908548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-236,Uid:5059489b86098a777b871054565efe44,Namespace:kube-system,Attempt:0,}" Apr 21 10:43:57.067427 kubelet[2847]: E0421 10:43:57.067385 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-236?timeout=10s\": dial tcp 172.31.20.236:6443: connect: connection refused" interval="800ms" Apr 21 10:43:57.249460 kubelet[2847]: I0421 10:43:57.249100 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236" Apr 21 10:43:57.249460 kubelet[2847]: E0421 10:43:57.249429 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.236:6443/api/v1/nodes\": dial tcp 172.31.20.236:6443: connect: connection refused" node="ip-172-31-20-236" Apr 21 10:43:57.295998 kubelet[2847]: E0421 10:43:57.295959 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:43:57.462015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732619250.mount: Deactivated successfully. 
Apr 21 10:43:57.471119 containerd[1989]: time="2026-04-21T10:43:57.471065163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:43:57.472045 containerd[1989]: time="2026-04-21T10:43:57.472003707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:43:57.473131 containerd[1989]: time="2026-04-21T10:43:57.473082700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:43:57.474158 containerd[1989]: time="2026-04-21T10:43:57.474105269Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:43:57.475371 containerd[1989]: time="2026-04-21T10:43:57.475327736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:43:57.476336 containerd[1989]: time="2026-04-21T10:43:57.476294334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:43:57.477044 containerd[1989]: time="2026-04-21T10:43:57.476911640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 21 10:43:57.477922 containerd[1989]: time="2026-04-21T10:43:57.477826536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:43:57.479950 
containerd[1989]: time="2026-04-21T10:43:57.479720052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.716162ms" Apr 21 10:43:57.481598 containerd[1989]: time="2026-04-21T10:43:57.481560128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.373064ms" Apr 21 10:43:57.488303 containerd[1989]: time="2026-04-21T10:43:57.488252047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.980948ms" Apr 21 10:43:57.552495 kubelet[2847]: E0421 10:43:57.552421 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:43:57.866861 kubelet[2847]: E0421 10:43:57.866730 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:43:57.868680 kubelet[2847]: E0421 10:43:57.868627 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-236?timeout=10s\": dial tcp 172.31.20.236:6443: connect: connection refused" interval="1.6s" Apr 21 10:43:57.892375 kubelet[2847]: E0421 10:43:57.892312 2847 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-236&limit=500&resourceVersion=0\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:43:58.053147 kubelet[2847]: I0421 10:43:58.052554 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236" Apr 21 10:43:58.053720 kubelet[2847]: E0421 10:43:58.053666 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.236:6443/api/v1/nodes\": dial tcp 172.31.20.236:6443: connect: connection refused" node="ip-172-31-20-236" Apr 21 10:43:58.083465 containerd[1989]: time="2026-04-21T10:43:58.083216817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:43:58.083465 containerd[1989]: time="2026-04-21T10:43:58.083289432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:43:58.083465 containerd[1989]: time="2026-04-21T10:43:58.083307054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.085619 containerd[1989]: time="2026-04-21T10:43:58.083421860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.085833 containerd[1989]: time="2026-04-21T10:43:58.085554977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:43:58.085833 containerd[1989]: time="2026-04-21T10:43:58.085664841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:43:58.085833 containerd[1989]: time="2026-04-21T10:43:58.085707093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.086396 containerd[1989]: time="2026-04-21T10:43:58.085920775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.086510 containerd[1989]: time="2026-04-21T10:43:58.086337680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:43:58.087395 containerd[1989]: time="2026-04-21T10:43:58.087338300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:43:58.088819 containerd[1989]: time="2026-04-21T10:43:58.088762765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.089544 containerd[1989]: time="2026-04-21T10:43:58.089463481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:43:58.121663 systemd[1]: Started cri-containerd-3cac72d95bc3879f95dd464bf001282542117c394aa8774767d1330b913e0128.scope - libcontainer container 3cac72d95bc3879f95dd464bf001282542117c394aa8774767d1330b913e0128. Apr 21 10:43:58.137670 systemd[1]: Started cri-containerd-64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d.scope - libcontainer container 64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d. Apr 21 10:43:58.146332 systemd[1]: Started cri-containerd-bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927.scope - libcontainer container bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927. Apr 21 10:43:58.231483 containerd[1989]: time="2026-04-21T10:43:58.231325598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-236,Uid:08e1219b9c9a1dff802e11fa62bb2e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cac72d95bc3879f95dd464bf001282542117c394aa8774767d1330b913e0128\"" Apr 21 10:43:58.244211 containerd[1989]: time="2026-04-21T10:43:58.243944526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-236,Uid:75baf26207da95cee7f28a20e2a4e4f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927\"" Apr 21 10:43:58.253792 containerd[1989]: time="2026-04-21T10:43:58.253747330Z" level=info msg="CreateContainer within sandbox \"bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:43:58.254514 containerd[1989]: time="2026-04-21T10:43:58.254343234Z" level=info msg="CreateContainer within sandbox \"3cac72d95bc3879f95dd464bf001282542117c394aa8774767d1330b913e0128\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:43:58.275746 containerd[1989]: time="2026-04-21T10:43:58.275695584Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-236,Uid:5059489b86098a777b871054565efe44,Namespace:kube-system,Attempt:0,} returns sandbox id \"64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d\"" Apr 21 10:43:58.281066 containerd[1989]: time="2026-04-21T10:43:58.280920671Z" level=info msg="CreateContainer within sandbox \"64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:43:58.364347 containerd[1989]: time="2026-04-21T10:43:58.364292231Z" level=info msg="CreateContainer within sandbox \"3cac72d95bc3879f95dd464bf001282542117c394aa8774767d1330b913e0128\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"06b0f46d508fe77d3030201a2bb1cf8a40fbe6c9987741e6b60add273b84ac30\"" Apr 21 10:43:58.365542 containerd[1989]: time="2026-04-21T10:43:58.365506897Z" level=info msg="StartContainer for \"06b0f46d508fe77d3030201a2bb1cf8a40fbe6c9987741e6b60add273b84ac30\"" Apr 21 10:43:58.371457 containerd[1989]: time="2026-04-21T10:43:58.370284139Z" level=info msg="CreateContainer within sandbox \"bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8\"" Apr 21 10:43:58.371457 containerd[1989]: time="2026-04-21T10:43:58.371340939Z" level=info msg="StartContainer for \"4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8\"" Apr 21 10:43:58.373117 containerd[1989]: time="2026-04-21T10:43:58.373012785Z" level=info msg="CreateContainer within sandbox \"64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5\"" Apr 21 10:43:58.374750 containerd[1989]: time="2026-04-21T10:43:58.374707150Z" 
level=info msg="StartContainer for \"1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5\"" Apr 21 10:43:58.412845 systemd[1]: Started cri-containerd-06b0f46d508fe77d3030201a2bb1cf8a40fbe6c9987741e6b60add273b84ac30.scope - libcontainer container 06b0f46d508fe77d3030201a2bb1cf8a40fbe6c9987741e6b60add273b84ac30. Apr 21 10:43:58.427116 systemd[1]: Started cri-containerd-4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8.scope - libcontainer container 4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8. Apr 21 10:43:58.444694 systemd[1]: Started cri-containerd-1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5.scope - libcontainer container 1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5. Apr 21 10:43:58.541407 containerd[1989]: time="2026-04-21T10:43:58.541364581Z" level=info msg="StartContainer for \"06b0f46d508fe77d3030201a2bb1cf8a40fbe6c9987741e6b60add273b84ac30\" returns successfully" Apr 21 10:43:58.541797 containerd[1989]: time="2026-04-21T10:43:58.541507326Z" level=info msg="StartContainer for \"4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8\" returns successfully" Apr 21 10:43:58.577268 containerd[1989]: time="2026-04-21T10:43:58.577137515Z" level=info msg="StartContainer for \"1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5\" returns successfully" Apr 21 10:43:58.588979 kubelet[2847]: E0421 10:43:58.588950 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:43:58.590611 kubelet[2847]: E0421 10:43:58.590406 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:43:58.595643 kubelet[2847]: E0421 10:43:58.595613 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:43:58.616860 kubelet[2847]: E0421 10:43:58.616816 2847 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.236:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:43:59.598489 kubelet[2847]: E0421 10:43:59.598421 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:43:59.600114 kubelet[2847]: E0421 10:43:59.600086 2847 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:43:59.658731 kubelet[2847]: I0421 10:43:59.657954 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236" Apr 21 10:44:00.642796 kubelet[2847]: E0421 10:44:00.642761 2847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-236\" not found" node="ip-172-31-20-236" Apr 21 10:44:00.712853 kubelet[2847]: I0421 10:44:00.712728 2847 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-236" Apr 21 10:44:00.713020 kubelet[2847]: E0421 10:44:00.712934 2847 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-236\": node \"ip-172-31-20-236\" not found" Apr 21 10:44:00.755393 kubelet[2847]: E0421 10:44:00.755350 2847 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-236\" not found" Apr 21 10:44:00.868183 kubelet[2847]: I0421 10:44:00.868142 2847 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-20-236" Apr 21 10:44:00.877314 kubelet[2847]: E0421 10:44:00.877278 2847 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-236" Apr 21 10:44:00.877314 kubelet[2847]: I0421 10:44:00.877307 2847 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-236" Apr 21 10:44:00.879403 kubelet[2847]: E0421 10:44:00.879369 2847 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-236" Apr 21 10:44:00.879403 kubelet[2847]: I0421 10:44:00.879398 2847 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-236" Apr 21 10:44:00.881359 kubelet[2847]: E0421 10:44:00.881320 2847 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-236" Apr 21 10:44:01.450541 kubelet[2847]: I0421 10:44:01.450506 2847 apiserver.go:52] "Watching apiserver" Apr 21 10:44:01.467346 kubelet[2847]: I0421 10:44:01.467267 2847 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:44:03.757213 kubelet[2847]: I0421 10:44:03.757179 2847 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-236" Apr 21 10:44:04.357323 systemd[1]: Reloading requested from client PID 3405 ('systemctl') (unit session-7.scope)... Apr 21 10:44:04.357343 systemd[1]: Reloading... Apr 21 10:44:04.558495 zram_generator::config[3446]: No configuration found. 
Apr 21 10:44:04.707994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:44:04.820680 systemd[1]: Reloading finished in 462 ms. Apr 21 10:44:04.869289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:44:04.880394 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:44:04.880704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:44:04.880855 systemd[1]: kubelet.service: Consumed 1.254s CPU time, 122.1M memory peak, 0B memory swap peak. Apr 21 10:44:04.890844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:44:05.531225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:44:05.547783 (kubelet)[3505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:44:05.687526 kubelet[3505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:44:05.687526 kubelet[3505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:44:05.687951 kubelet[3505]: I0421 10:44:05.687591 3505 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:44:05.701092 kubelet[3505]: I0421 10:44:05.701040 3505 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 10:44:05.701092 kubelet[3505]: I0421 10:44:05.701072 3505 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:44:05.702292 kubelet[3505]: I0421 10:44:05.702256 3505 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:44:05.702404 kubelet[3505]: I0421 10:44:05.702355 3505 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:44:05.703781 kubelet[3505]: I0421 10:44:05.702727 3505 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:44:05.705230 kubelet[3505]: I0421 10:44:05.704804 3505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:44:05.713473 kubelet[3505]: I0421 10:44:05.712272 3505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:44:05.718234 kubelet[3505]: E0421 10:44:05.718188 3505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:44:05.718397 kubelet[3505]: I0421 10:44:05.718260 3505 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:44:05.721938 kubelet[3505]: I0421 10:44:05.721891 3505 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:44:05.723475 kubelet[3505]: I0421 10:44:05.723100 3505 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:44:05.723475 kubelet[3505]: I0421 10:44:05.723150 3505 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:44:05.723475 kubelet[3505]: I0421 10:44:05.723365 3505 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:44:05.723475 kubelet[3505]: I0421 10:44:05.723379 3505 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:44:05.723792 kubelet[3505]: I0421 10:44:05.723415 3505 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:44:05.724349 kubelet[3505]: I0421 10:44:05.724155 3505 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:44:05.726480 kubelet[3505]: I0421 10:44:05.725941 3505 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:44:05.726480 kubelet[3505]: I0421 10:44:05.725985 3505 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:44:05.726480 kubelet[3505]: I0421 10:44:05.726015 3505 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:44:05.726480 kubelet[3505]: I0421 10:44:05.726039 3505 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:44:05.742707 kubelet[3505]: I0421 10:44:05.742181 3505 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:44:05.743020 kubelet[3505]: I0421 10:44:05.742997 3505 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:44:05.743108 kubelet[3505]: I0421 10:44:05.743053 3505 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:44:05.752240 kubelet[3505]: I0421 10:44:05.752210 3505 server.go:1262] "Started kubelet"
Apr 21 10:44:05.759463 kubelet[3505]: I0421 10:44:05.756575 3505 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:44:05.759463 kubelet[3505]: I0421 10:44:05.756636 3505 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:44:05.759463 kubelet[3505]: I0421 10:44:05.756978 3505 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:44:05.759463 kubelet[3505]: I0421 10:44:05.757061 3505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:44:05.759463 kubelet[3505]: I0421 10:44:05.758788 3505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:44:05.761237 kubelet[3505]: I0421 10:44:05.760581 3505 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:44:05.771826 kubelet[3505]: I0421 10:44:05.770423 3505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:44:05.776218 kubelet[3505]: I0421 10:44:05.772880 3505 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:44:05.778370 kubelet[3505]: I0421 10:44:05.772921 3505 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:44:05.778552 kubelet[3505]: E0421 10:44:05.773177 3505 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-236\" not found"
Apr 21 10:44:05.778812 kubelet[3505]: I0421 10:44:05.778801 3505 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:44:05.784124 kubelet[3505]: I0421 10:44:05.784010 3505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:44:05.786642 kubelet[3505]: I0421 10:44:05.786614 3505 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:44:05.786642 kubelet[3505]: I0421 10:44:05.786639 3505 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:44:05.797546 kubelet[3505]: I0421 10:44:05.797502 3505 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:44:05.798675 kubelet[3505]: I0421 10:44:05.798641 3505 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:44:05.800870 kubelet[3505]: I0421 10:44:05.800826 3505 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 21 10:44:05.800974 kubelet[3505]: I0421 10:44:05.800889 3505 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 21 10:44:05.800974 kubelet[3505]: E0421 10:44:05.800944 3505 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:44:05.859084 kubelet[3505]: I0421 10:44:05.858882 3505 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:44:05.859084 kubelet[3505]: I0421 10:44:05.858903 3505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:44:05.859084 kubelet[3505]: I0421 10:44:05.858926 3505 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:44:05.859084 kubelet[3505]: I0421 10:44:05.859084 3505 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859096 3505 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859116 3505 policy_none.go:49] "None policy: Start"
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859128 3505 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859140 3505 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859263 3505 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 10:44:05.859362 kubelet[3505]: I0421 10:44:05.859273 3505 policy_none.go:47] "Start"
Apr 21 10:44:05.868498 kubelet[3505]: E0421 10:44:05.868113 3505 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:44:05.868498 kubelet[3505]: I0421 10:44:05.868305 3505 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:44:05.868498 kubelet[3505]: I0421 10:44:05.868316 3505 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:44:05.870183 kubelet[3505]: I0421 10:44:05.870100 3505 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:44:05.874556 kubelet[3505]: E0421 10:44:05.873920 3505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:44:05.902270 kubelet[3505]: I0421 10:44:05.902229 3505 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-236"
Apr 21 10:44:05.902704 kubelet[3505]: I0421 10:44:05.902562 3505 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:05.912354 kubelet[3505]: I0421 10:44:05.912253 3505 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.971300 kubelet[3505]: I0421 10:44:05.971268 3505 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-236"
Apr 21 10:44:05.979795 kubelet[3505]: I0421 10:44:05.979753 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:05.979795 kubelet[3505]: I0421 10:44:05.979801 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:05.980009 kubelet[3505]: I0421 10:44:05.979828 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.980009 kubelet[3505]: I0421 10:44:05.979848 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.980009 kubelet[3505]: I0421 10:44:05.979868 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.980009 kubelet[3505]: I0421 10:44:05.979886 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08e1219b9c9a1dff802e11fa62bb2e41-ca-certs\") pod \"kube-apiserver-ip-172-31-20-236\" (UID: \"08e1219b9c9a1dff802e11fa62bb2e41\") " pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:05.980009 kubelet[3505]: I0421 10:44:05.979907 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.980229 kubelet[3505]: I0421 10:44:05.979938 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75baf26207da95cee7f28a20e2a4e4f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-236\" (UID: \"75baf26207da95cee7f28a20e2a4e4f1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-236"
Apr 21 10:44:05.980229 kubelet[3505]: I0421 10:44:05.979959 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5059489b86098a777b871054565efe44-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-236\" (UID: \"5059489b86098a777b871054565efe44\") " pod="kube-system/kube-scheduler-ip-172-31-20-236"
Apr 21 10:44:05.983912 kubelet[3505]: I0421 10:44:05.983883 3505 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-236"
Apr 21 10:44:05.983912 kubelet[3505]: I0421 10:44:05.984017 3505 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-236"
Apr 21 10:44:06.004352 kubelet[3505]: E0421 10:44:06.004298 3505 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-236\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:06.733293 kubelet[3505]: I0421 10:44:06.733232 3505 apiserver.go:52] "Watching apiserver"
Apr 21 10:44:06.779359 kubelet[3505]: I0421 10:44:06.779313 3505 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:44:06.841407 kubelet[3505]: I0421 10:44:06.841374 3505 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:06.841754 kubelet[3505]: I0421 10:44:06.841725 3505 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-236"
Apr 21 10:44:06.852272 kubelet[3505]: E0421 10:44:06.851952 3505 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-236\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-236"
Apr 21 10:44:06.853841 kubelet[3505]: E0421 10:44:06.853609 3505 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-236\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-236"
Apr 21 10:44:06.876169 kubelet[3505]: I0421 10:44:06.876032 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-236" podStartSLOduration=3.875988339 podStartE2EDuration="3.875988339s" podCreationTimestamp="2026-04-21 10:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:44:06.875264068 +0000 UTC m=+1.305289186" watchObservedRunningTime="2026-04-21 10:44:06.875988339 +0000 UTC m=+1.306013424"
Apr 21 10:44:06.876407 kubelet[3505]: I0421 10:44:06.876212 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-236" podStartSLOduration=1.876200606 podStartE2EDuration="1.876200606s" podCreationTimestamp="2026-04-21 10:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:44:06.86466375 +0000 UTC m=+1.294688844" watchObservedRunningTime="2026-04-21 10:44:06.876200606 +0000 UTC m=+1.306225696"
Apr 21 10:44:06.907198 kubelet[3505]: I0421 10:44:06.906573 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-236" podStartSLOduration=1.9065537080000001 podStartE2EDuration="1.906553708s" podCreationTimestamp="2026-04-21 10:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:44:06.894538474 +0000 UTC m=+1.324563570" watchObservedRunningTime="2026-04-21 10:44:06.906553708 +0000 UTC m=+1.336578801"
Apr 21 10:44:09.422231 kubelet[3505]: I0421 10:44:09.422183 3505 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 10:44:09.422848 containerd[1989]: time="2026-04-21T10:44:09.422621196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 10:44:09.423232 kubelet[3505]: I0421 10:44:09.422840 3505 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 21 10:44:10.201858 systemd[1]: Created slice kubepods-besteffort-podbad53976_a8cc_485e_9f3c_0f2d0b2ab378.slice - libcontainer container kubepods-besteffort-podbad53976_a8cc_485e_9f3c_0f2d0b2ab378.slice.
Apr 21 10:44:10.212333 kubelet[3505]: I0421 10:44:10.212298 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bad53976-a8cc-485e-9f3c-0f2d0b2ab378-kube-proxy\") pod \"kube-proxy-8jflz\" (UID: \"bad53976-a8cc-485e-9f3c-0f2d0b2ab378\") " pod="kube-system/kube-proxy-8jflz"
Apr 21 10:44:10.212333 kubelet[3505]: I0421 10:44:10.212335 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bad53976-a8cc-485e-9f3c-0f2d0b2ab378-xtables-lock\") pod \"kube-proxy-8jflz\" (UID: \"bad53976-a8cc-485e-9f3c-0f2d0b2ab378\") " pod="kube-system/kube-proxy-8jflz"
Apr 21 10:44:10.215713 kubelet[3505]: I0421 10:44:10.212376 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cxg\" (UniqueName: \"kubernetes.io/projected/bad53976-a8cc-485e-9f3c-0f2d0b2ab378-kube-api-access-v2cxg\") pod \"kube-proxy-8jflz\" (UID: \"bad53976-a8cc-485e-9f3c-0f2d0b2ab378\") " pod="kube-system/kube-proxy-8jflz"
Apr 21 10:44:10.215713 kubelet[3505]: I0421 10:44:10.212407 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bad53976-a8cc-485e-9f3c-0f2d0b2ab378-lib-modules\") pod \"kube-proxy-8jflz\" (UID: \"bad53976-a8cc-485e-9f3c-0f2d0b2ab378\") " pod="kube-system/kube-proxy-8jflz"
Apr 21 10:44:10.516034 containerd[1989]: time="2026-04-21T10:44:10.515776775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jflz,Uid:bad53976-a8cc-485e-9f3c-0f2d0b2ab378,Namespace:kube-system,Attempt:0,}"
Apr 21 10:44:10.547665 containerd[1989]: time="2026-04-21T10:44:10.547550194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:44:10.547665 containerd[1989]: time="2026-04-21T10:44:10.547605091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:44:10.547665 containerd[1989]: time="2026-04-21T10:44:10.547620392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:10.548005 containerd[1989]: time="2026-04-21T10:44:10.547720881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:10.575132 systemd[1]: run-containerd-runc-k8s.io-9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004-runc.RUSdL5.mount: Deactivated successfully.
Apr 21 10:44:10.585785 systemd[1]: Started cri-containerd-9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004.scope - libcontainer container 9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004.
Apr 21 10:44:10.617341 containerd[1989]: time="2026-04-21T10:44:10.617004554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jflz,Uid:bad53976-a8cc-485e-9f3c-0f2d0b2ab378,Namespace:kube-system,Attempt:0,} returns sandbox id \"9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004\""
Apr 21 10:44:10.633536 containerd[1989]: time="2026-04-21T10:44:10.632890673Z" level=info msg="CreateContainer within sandbox \"9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:44:10.653421 containerd[1989]: time="2026-04-21T10:44:10.653372165Z" level=info msg="CreateContainer within sandbox \"9baac49e7e0e9a64e8f089635ddf99c70f6857d37f3d65f1a391698d8ade7004\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1eb775d0e464e4628e651014047947f79a7f03e02eca26cf3f9f5554a0f93474\""
Apr 21 10:44:10.655002 containerd[1989]: time="2026-04-21T10:44:10.654524227Z" level=info msg="StartContainer for \"1eb775d0e464e4628e651014047947f79a7f03e02eca26cf3f9f5554a0f93474\""
Apr 21 10:44:10.710628 systemd[1]: Created slice kubepods-besteffort-podb9df8161_f168_42ad_bbd6_035d00306582.slice - libcontainer container kubepods-besteffort-podb9df8161_f168_42ad_bbd6_035d00306582.slice.
Apr 21 10:44:10.716722 kubelet[3505]: I0421 10:44:10.716676 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjfq\" (UniqueName: \"kubernetes.io/projected/b9df8161-f168-42ad-bbd6-035d00306582-kube-api-access-lbjfq\") pod \"tigera-operator-5588576f44-kxzs9\" (UID: \"b9df8161-f168-42ad-bbd6-035d00306582\") " pod="tigera-operator/tigera-operator-5588576f44-kxzs9"
Apr 21 10:44:10.717177 kubelet[3505]: I0421 10:44:10.716737 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9df8161-f168-42ad-bbd6-035d00306582-var-lib-calico\") pod \"tigera-operator-5588576f44-kxzs9\" (UID: \"b9df8161-f168-42ad-bbd6-035d00306582\") " pod="tigera-operator/tigera-operator-5588576f44-kxzs9"
Apr 21 10:44:10.728786 systemd[1]: Started cri-containerd-1eb775d0e464e4628e651014047947f79a7f03e02eca26cf3f9f5554a0f93474.scope - libcontainer container 1eb775d0e464e4628e651014047947f79a7f03e02eca26cf3f9f5554a0f93474.
Apr 21 10:44:10.768387 containerd[1989]: time="2026-04-21T10:44:10.768219221Z" level=info msg="StartContainer for \"1eb775d0e464e4628e651014047947f79a7f03e02eca26cf3f9f5554a0f93474\" returns successfully"
Apr 21 10:44:10.873569 kubelet[3505]: I0421 10:44:10.873478 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8jflz" podStartSLOduration=0.873449391 podStartE2EDuration="873.449391ms" podCreationTimestamp="2026-04-21 10:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:44:10.873274132 +0000 UTC m=+5.303299227" watchObservedRunningTime="2026-04-21 10:44:10.873449391 +0000 UTC m=+5.303474479"
Apr 21 10:44:11.025880 containerd[1989]: time="2026-04-21T10:44:11.025752950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-kxzs9,Uid:b9df8161-f168-42ad-bbd6-035d00306582,Namespace:tigera-operator,Attempt:0,}"
Apr 21 10:44:11.049665 containerd[1989]: time="2026-04-21T10:44:11.049516274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:44:11.049665 containerd[1989]: time="2026-04-21T10:44:11.049623969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:44:11.050377 containerd[1989]: time="2026-04-21T10:44:11.049646609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:11.050377 containerd[1989]: time="2026-04-21T10:44:11.049830766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:11.070686 systemd[1]: Started cri-containerd-49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa.scope - libcontainer container 49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa.
Apr 21 10:44:11.127103 containerd[1989]: time="2026-04-21T10:44:11.126926359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-kxzs9,Uid:b9df8161-f168-42ad-bbd6-035d00306582,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa\""
Apr 21 10:44:11.130165 containerd[1989]: time="2026-04-21T10:44:11.130130269Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 21 10:44:12.466039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195637916.mount: Deactivated successfully.
Apr 21 10:44:14.111396 containerd[1989]: time="2026-04-21T10:44:14.111340843Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:14.112646 containerd[1989]: time="2026-04-21T10:44:14.112445644Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 21 10:44:14.114467 containerd[1989]: time="2026-04-21T10:44:14.113539850Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:14.116031 containerd[1989]: time="2026-04-21T10:44:14.115979075Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:14.116987 containerd[1989]: time="2026-04-21T10:44:14.116795750Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.986627476s"
Apr 21 10:44:14.116987 containerd[1989]: time="2026-04-21T10:44:14.116837586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 21 10:44:14.121898 containerd[1989]: time="2026-04-21T10:44:14.121778903Z" level=info msg="CreateContainer within sandbox \"49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 21 10:44:14.143802 containerd[1989]: time="2026-04-21T10:44:14.143751602Z" level=info msg="CreateContainer within sandbox \"49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7\""
Apr 21 10:44:14.144564 containerd[1989]: time="2026-04-21T10:44:14.144530673Z" level=info msg="StartContainer for \"99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7\""
Apr 21 10:44:14.188864 systemd[1]: Started cri-containerd-99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7.scope - libcontainer container 99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7.
Apr 21 10:44:14.220162 containerd[1989]: time="2026-04-21T10:44:14.220114553Z" level=info msg="StartContainer for \"99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7\" returns successfully"
Apr 21 10:44:16.599814 kubelet[3505]: I0421 10:44:16.599746 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-kxzs9" podStartSLOduration=3.610584515 podStartE2EDuration="6.599727603s" podCreationTimestamp="2026-04-21 10:44:10 +0000 UTC" firstStartedPulling="2026-04-21 10:44:11.128949816 +0000 UTC m=+5.558974889" lastFinishedPulling="2026-04-21 10:44:14.1180929 +0000 UTC m=+8.548117977" observedRunningTime="2026-04-21 10:44:14.881817484 +0000 UTC m=+9.311842579" watchObservedRunningTime="2026-04-21 10:44:16.599727603 +0000 UTC m=+11.029752697"
Apr 21 10:44:19.755374 sudo[2337]: pam_unix(sudo:session): session closed for user root
Apr 21 10:44:19.926092 sshd[2334]: pam_unix(sshd:session): session closed for user core
Apr 21 10:44:19.932904 systemd[1]: sshd@6-172.31.20.236:22-50.85.169.122:56718.service: Deactivated successfully.
Apr 21 10:44:19.937216 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:44:19.938288 systemd[1]: session-7.scope: Consumed 5.960s CPU time, 149.6M memory peak, 0B memory swap peak.
Apr 21 10:44:19.942888 systemd-logind[1970]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:44:19.947818 systemd-logind[1970]: Removed session 7.
Apr 21 10:44:23.557638 systemd[1]: Created slice kubepods-besteffort-podfa480568_e16f_4790_a047_790b60a2c0dc.slice - libcontainer container kubepods-besteffort-podfa480568_e16f_4790_a047_790b60a2c0dc.slice.
Apr 21 10:44:23.616486 kubelet[3505]: I0421 10:44:23.616423 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46cw\" (UniqueName: \"kubernetes.io/projected/fa480568-e16f-4790-a047-790b60a2c0dc-kube-api-access-w46cw\") pod \"calico-typha-69c69d6d5f-g2x85\" (UID: \"fa480568-e16f-4790-a047-790b60a2c0dc\") " pod="calico-system/calico-typha-69c69d6d5f-g2x85"
Apr 21 10:44:23.617078 kubelet[3505]: I0421 10:44:23.616507 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa480568-e16f-4790-a047-790b60a2c0dc-tigera-ca-bundle\") pod \"calico-typha-69c69d6d5f-g2x85\" (UID: \"fa480568-e16f-4790-a047-790b60a2c0dc\") " pod="calico-system/calico-typha-69c69d6d5f-g2x85"
Apr 21 10:44:23.617078 kubelet[3505]: I0421 10:44:23.616531 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa480568-e16f-4790-a047-790b60a2c0dc-typha-certs\") pod \"calico-typha-69c69d6d5f-g2x85\" (UID: \"fa480568-e16f-4790-a047-790b60a2c0dc\") " pod="calico-system/calico-typha-69c69d6d5f-g2x85"
Apr 21 10:44:23.738036 systemd[1]: Created slice kubepods-besteffort-pod249cfda1_1b0d_4e84_a72d_53cc5edc3fef.slice - libcontainer container kubepods-besteffort-pod249cfda1_1b0d_4e84_a72d_53cc5edc3fef.slice.
Apr 21 10:44:23.817262 kubelet[3505]: I0421 10:44:23.817088 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-cni-net-dir\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.817262 kubelet[3505]: I0421 10:44:23.817144 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-var-run-calico\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.818542 kubelet[3505]: I0421 10:44:23.817171 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-cni-bin-dir\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.818542 kubelet[3505]: I0421 10:44:23.818134 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-tigera-ca-bundle\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.818542 kubelet[3505]: I0421 10:44:23.818167 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-cni-log-dir\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.818542 kubelet[3505]: I0421 10:44:23.818190 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-policysync\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.818542 kubelet[3505]: I0421 10:44:23.818209 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-sys-fs\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819160 kubelet[3505]: I0421 10:44:23.818232 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8prj\" (UniqueName: \"kubernetes.io/projected/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-kube-api-access-r8prj\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819160 kubelet[3505]: I0421 10:44:23.818259 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-bpffs\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819160 kubelet[3505]: I0421 10:44:23.818281 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-flexvol-driver-host\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819160 kubelet[3505]: I0421 10:44:23.818303 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-lib-modules\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819160 kubelet[3505]: I0421 10:44:23.818325 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-var-lib-calico\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819363 kubelet[3505]: I0421 10:44:23.818350 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-node-certs\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819363 kubelet[3505]: I0421 10:44:23.818377 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-nodeproc\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.819363 kubelet[3505]: I0421 10:44:23.818411 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249cfda1-1b0d-4e84-a72d-53cc5edc3fef-xtables-lock\") pod \"calico-node-xkcqj\" (UID: \"249cfda1-1b0d-4e84-a72d-53cc5edc3fef\") " pod="calico-system/calico-node-xkcqj"
Apr 21 10:44:23.827403 kubelet[3505]: E0421 10:44:23.825847 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb"
Apr 21 10:44:23.866744 containerd[1989]: time="2026-04-21T10:44:23.866703449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c69d6d5f-g2x85,Uid:fa480568-e16f-4790-a047-790b60a2c0dc,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:23.920919 kubelet[3505]: I0421 10:44:23.919018 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fbe93840-23f5-4bbe-b319-4df10f6383eb-socket-dir\") pod \"csi-node-driver-68cq7\" (UID: \"fbe93840-23f5-4bbe-b319-4df10f6383eb\") " pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:23.920919 kubelet[3505]: I0421 10:44:23.919103 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbe93840-23f5-4bbe-b319-4df10f6383eb-registration-dir\") pod \"csi-node-driver-68cq7\" (UID: \"fbe93840-23f5-4bbe-b319-4df10f6383eb\") " pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:23.920919 kubelet[3505]: I0421 10:44:23.919118 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fbe93840-23f5-4bbe-b319-4df10f6383eb-varrun\") pod \"csi-node-driver-68cq7\" (UID: \"fbe93840-23f5-4bbe-b319-4df10f6383eb\") " pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:23.920919 kubelet[3505]: I0421 10:44:23.919153 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x9r\" (UniqueName: \"kubernetes.io/projected/fbe93840-23f5-4bbe-b319-4df10f6383eb-kube-api-access-s5x9r\") pod \"csi-node-driver-68cq7\" (UID: \"fbe93840-23f5-4bbe-b319-4df10f6383eb\") " pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:23.920919 kubelet[3505]: I0421 10:44:23.919170 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbe93840-23f5-4bbe-b319-4df10f6383eb-kubelet-dir\") pod \"csi-node-driver-68cq7\" (UID: \"fbe93840-23f5-4bbe-b319-4df10f6383eb\") " pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:23.928385 kubelet[3505]: E0421 10:44:23.928239 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:44:23.928385 kubelet[3505]: W0421 10:44:23.928302 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:44:23.928385 kubelet[3505]: E0421 10:44:23.928330 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:44:23.936048 kubelet[3505]: E0421 10:44:23.936019 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:44:23.939681 kubelet[3505]: W0421 10:44:23.939521 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:44:23.939681 kubelet[3505]: E0421 10:44:23.939612 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:44:23.950895 kubelet[3505]: E0421 10:44:23.950865 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:44:23.951149 kubelet[3505]: W0421 10:44:23.951056 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:44:23.951149 kubelet[3505]: E0421 10:44:23.951087 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:44:23.978512 containerd[1989]: time="2026-04-21T10:44:23.978219901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:44:23.978512 containerd[1989]: time="2026-04-21T10:44:23.978287827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:44:23.978512 containerd[1989]: time="2026-04-21T10:44:23.978312896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:23.978512 containerd[1989]: time="2026-04-21T10:44:23.978406514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:44:24.021120 kubelet[3505]: E0421 10:44:24.021008 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.021493 kubelet[3505]: W0421 10:44:24.021467 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.021792 kubelet[3505]: E0421 10:44:24.021749 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.023003 kubelet[3505]: E0421 10:44:24.022978 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.023175 kubelet[3505]: W0421 10:44:24.023109 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.023175 kubelet[3505]: E0421 10:44:24.023135 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.024536 kubelet[3505]: E0421 10:44:24.023936 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.024536 kubelet[3505]: W0421 10:44:24.023953 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.024536 kubelet[3505]: E0421 10:44:24.024095 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.025427 kubelet[3505]: E0421 10:44:24.025411 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.025537 kubelet[3505]: W0421 10:44:24.025522 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.025611 kubelet[3505]: E0421 10:44:24.025600 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.026010 kubelet[3505]: E0421 10:44:24.025996 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.026095 kubelet[3505]: W0421 10:44:24.026081 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.026176 kubelet[3505]: E0421 10:44:24.026164 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.026669 systemd[1]: Started cri-containerd-adac44fb6060c4ddf2d622037bf7e737c07cffe94a8ae37df41b2ef02edd1bc8.scope - libcontainer container adac44fb6060c4ddf2d622037bf7e737c07cffe94a8ae37df41b2ef02edd1bc8. Apr 21 10:44:24.027852 kubelet[3505]: E0421 10:44:24.027838 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.027951 kubelet[3505]: W0421 10:44:24.027938 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.028031 kubelet[3505]: E0421 10:44:24.028019 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.028832 kubelet[3505]: E0421 10:44:24.028809 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.028919 kubelet[3505]: W0421 10:44:24.028843 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.028919 kubelet[3505]: E0421 10:44:24.028859 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.029421 kubelet[3505]: E0421 10:44:24.029370 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.029421 kubelet[3505]: W0421 10:44:24.029384 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.029848 kubelet[3505]: E0421 10:44:24.029826 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.030209 kubelet[3505]: E0421 10:44:24.030189 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.030209 kubelet[3505]: W0421 10:44:24.030204 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.030341 kubelet[3505]: E0421 10:44:24.030218 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.030984 kubelet[3505]: E0421 10:44:24.030929 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.030984 kubelet[3505]: W0421 10:44:24.030962 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.030984 kubelet[3505]: E0421 10:44:24.030976 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.031319 kubelet[3505]: E0421 10:44:24.031302 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.031400 kubelet[3505]: W0421 10:44:24.031321 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.031400 kubelet[3505]: E0421 10:44:24.031345 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.033553 kubelet[3505]: E0421 10:44:24.033524 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.033553 kubelet[3505]: W0421 10:44:24.033540 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.033676 kubelet[3505]: E0421 10:44:24.033565 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.033909 kubelet[3505]: E0421 10:44:24.033877 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.033909 kubelet[3505]: W0421 10:44:24.033891 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.033909 kubelet[3505]: E0421 10:44:24.033905 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.034290 kubelet[3505]: E0421 10:44:24.034270 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.034290 kubelet[3505]: W0421 10:44:24.034289 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.034524 kubelet[3505]: E0421 10:44:24.034502 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.035711 kubelet[3505]: E0421 10:44:24.035689 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.035711 kubelet[3505]: W0421 10:44:24.035707 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.035854 kubelet[3505]: E0421 10:44:24.035721 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.036763 kubelet[3505]: E0421 10:44:24.036741 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.036763 kubelet[3505]: W0421 10:44:24.036756 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.036878 kubelet[3505]: E0421 10:44:24.036770 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.037414 kubelet[3505]: E0421 10:44:24.037394 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.037414 kubelet[3505]: W0421 10:44:24.037410 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.037563 kubelet[3505]: E0421 10:44:24.037424 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.038755 kubelet[3505]: E0421 10:44:24.038734 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.038755 kubelet[3505]: W0421 10:44:24.038750 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.038885 kubelet[3505]: E0421 10:44:24.038772 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.039120 kubelet[3505]: E0421 10:44:24.039100 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.039120 kubelet[3505]: W0421 10:44:24.039115 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.039233 kubelet[3505]: E0421 10:44:24.039129 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.039397 kubelet[3505]: E0421 10:44:24.039381 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.039397 kubelet[3505]: W0421 10:44:24.039396 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.039533 kubelet[3505]: E0421 10:44:24.039409 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.040896 kubelet[3505]: E0421 10:44:24.040728 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.040896 kubelet[3505]: W0421 10:44:24.040743 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.040896 kubelet[3505]: E0421 10:44:24.040757 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.041125 kubelet[3505]: E0421 10:44:24.041018 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.041125 kubelet[3505]: W0421 10:44:24.041029 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.041125 kubelet[3505]: E0421 10:44:24.041043 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.041505 kubelet[3505]: E0421 10:44:24.041484 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.041505 kubelet[3505]: W0421 10:44:24.041498 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.041660 kubelet[3505]: E0421 10:44:24.041511 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.042257 kubelet[3505]: E0421 10:44:24.042236 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.042367 kubelet[3505]: W0421 10:44:24.042348 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.042430 kubelet[3505]: E0421 10:44:24.042368 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.043562 kubelet[3505]: E0421 10:44:24.043545 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.043562 kubelet[3505]: W0421 10:44:24.043561 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.043719 kubelet[3505]: E0421 10:44:24.043576 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:24.050073 containerd[1989]: time="2026-04-21T10:44:24.050013871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkcqj,Uid:249cfda1-1b0d-4e84-a72d-53cc5edc3fef,Namespace:calico-system,Attempt:0,}" Apr 21 10:44:24.055199 kubelet[3505]: E0421 10:44:24.053819 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:24.055199 kubelet[3505]: W0421 10:44:24.053842 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:24.055199 kubelet[3505]: E0421 10:44:24.053867 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:24.117769 containerd[1989]: time="2026-04-21T10:44:24.115214941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c69d6d5f-g2x85,Uid:fa480568-e16f-4790-a047-790b60a2c0dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"adac44fb6060c4ddf2d622037bf7e737c07cffe94a8ae37df41b2ef02edd1bc8\"" Apr 21 10:44:24.122550 containerd[1989]: time="2026-04-21T10:44:24.121626221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:44:24.160855 containerd[1989]: time="2026-04-21T10:44:24.157896499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:44:24.160855 containerd[1989]: time="2026-04-21T10:44:24.160515205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:44:24.160855 containerd[1989]: time="2026-04-21T10:44:24.160530359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:44:24.160855 containerd[1989]: time="2026-04-21T10:44:24.160752087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:44:24.190697 systemd[1]: Started cri-containerd-b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754.scope - libcontainer container b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754. 
Apr 21 10:44:24.224585 containerd[1989]: time="2026-04-21T10:44:24.224402458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkcqj,Uid:249cfda1-1b0d-4e84-a72d-53cc5edc3fef,Namespace:calico-system,Attempt:0,} returns sandbox id \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\"" Apr 21 10:44:25.728802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324113737.mount: Deactivated successfully. Apr 21 10:44:25.804092 kubelet[3505]: E0421 10:44:25.804042 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:26.937067 containerd[1989]: time="2026-04-21T10:44:26.937016872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:26.938149 containerd[1989]: time="2026-04-21T10:44:26.938089186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:44:26.939460 containerd[1989]: time="2026-04-21T10:44:26.939394473Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:26.943302 containerd[1989]: time="2026-04-21T10:44:26.942150352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:26.943302 containerd[1989]: time="2026-04-21T10:44:26.942870205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.820174189s" Apr 21 10:44:26.943302 containerd[1989]: time="2026-04-21T10:44:26.942909142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:44:26.944219 containerd[1989]: time="2026-04-21T10:44:26.944190306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:44:26.971796 containerd[1989]: time="2026-04-21T10:44:26.971751264Z" level=info msg="CreateContainer within sandbox \"adac44fb6060c4ddf2d622037bf7e737c07cffe94a8ae37df41b2ef02edd1bc8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:44:27.002870 containerd[1989]: time="2026-04-21T10:44:27.002817564Z" level=info msg="CreateContainer within sandbox \"adac44fb6060c4ddf2d622037bf7e737c07cffe94a8ae37df41b2ef02edd1bc8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713\"" Apr 21 10:44:27.003784 containerd[1989]: time="2026-04-21T10:44:27.003745928Z" level=info msg="StartContainer for \"07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713\"" Apr 21 10:44:27.043261 systemd[1]: Started cri-containerd-07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713.scope - libcontainer container 07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713. 
Apr 21 10:44:27.101284 containerd[1989]: time="2026-04-21T10:44:27.100971534Z" level=info msg="StartContainer for \"07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713\" returns successfully" Apr 21 10:44:27.803014 kubelet[3505]: E0421 10:44:27.802962 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:27.925662 kubelet[3505]: E0421 10:44:27.924858 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:27.925662 kubelet[3505]: W0421 10:44:27.924896 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:27.925662 kubelet[3505]: E0421 10:44:27.924927 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:27.925662 kubelet[3505]: E0421 10:44:27.925373 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:27.925662 kubelet[3505]: W0421 10:44:27.925388 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:27.925662 kubelet[3505]: E0421 10:44:27.925406 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:27.927197 kubelet[3505]: E0421 10:44:27.926110 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:27.927197 kubelet[3505]: W0421 10:44:27.926127 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:27.927197 kubelet[3505]: E0421 10:44:27.926143 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:27.927197 kubelet[3505]: E0421 10:44:27.926914 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:27.927197 kubelet[3505]: W0421 10:44:27.926928 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:27.927197 kubelet[3505]: E0421 10:44:27.926943 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:44:27.928315 kubelet[3505]: E0421 10:44:27.927406 3505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:44:27.928315 kubelet[3505]: W0421 10:44:27.927418 3505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:44:27.928315 kubelet[3505]: E0421 10:44:27.927464 3505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:44:27.953031 systemd[1]: run-containerd-runc-k8s.io-07e4f5b7af9105bcc46913fba144063d11247bbdf759de3da45e1962a53b1713-runc.s1ftO8.mount: Deactivated successfully. Apr 21 10:44:28.480972 containerd[1989]: time="2026-04-21T10:44:28.480910042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:28.483265 containerd[1989]: time="2026-04-21T10:44:28.483194221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:44:28.486129 containerd[1989]: time="2026-04-21T10:44:28.486064731Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:28.489827 containerd[1989]: time="2026-04-21T10:44:28.489763079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:28.491159 containerd[1989]: time="2026-04-21T10:44:28.490552502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.546320458s" Apr 21 10:44:28.491159 containerd[1989]: time="2026-04-21T10:44:28.490596035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:44:28.498921 containerd[1989]: time="2026-04-21T10:44:28.498870597Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:44:28.535897 containerd[1989]: time="2026-04-21T10:44:28.535843786Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4\"" Apr 21 10:44:28.536835 containerd[1989]: time="2026-04-21T10:44:28.536803259Z" level=info msg="StartContainer for \"9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4\"" Apr 21 10:44:28.601683 systemd[1]: Started cri-containerd-9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4.scope - libcontainer container 9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4. Apr 21 10:44:28.644412 containerd[1989]: time="2026-04-21T10:44:28.644356279Z" level=info msg="StartContainer for \"9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4\" returns successfully" Apr 21 10:44:28.657407 systemd[1]: cri-containerd-9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4.scope: Deactivated successfully. 
Apr 21 10:44:28.740666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4-rootfs.mount: Deactivated successfully. Apr 21 10:44:28.890154 containerd[1989]: time="2026-04-21T10:44:28.877566205Z" level=info msg="shim disconnected" id=9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4 namespace=k8s.io Apr 21 10:44:28.890154 containerd[1989]: time="2026-04-21T10:44:28.890151419Z" level=warning msg="cleaning up after shim disconnected" id=9a203ae54bfbeba4ded21f3435efd15e715a38577bb0165fe0ddc5efd3d1e7e4 namespace=k8s.io Apr 21 10:44:28.890478 containerd[1989]: time="2026-04-21T10:44:28.890177741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:44:28.921548 kubelet[3505]: I0421 10:44:28.919925 3505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:44:28.922883 containerd[1989]: time="2026-04-21T10:44:28.922849586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:44:28.945625 kubelet[3505]: I0421 10:44:28.944785 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69c69d6d5f-g2x85" podStartSLOduration=3.121024358 podStartE2EDuration="5.944762158s" podCreationTimestamp="2026-04-21 10:44:23 +0000 UTC" firstStartedPulling="2026-04-21 10:44:24.120314034 +0000 UTC m=+18.550339117" lastFinishedPulling="2026-04-21 10:44:26.944051828 +0000 UTC m=+21.374076917" observedRunningTime="2026-04-21 10:44:27.928671591 +0000 UTC m=+22.358696686" watchObservedRunningTime="2026-04-21 10:44:28.944762158 +0000 UTC m=+23.374787254" Apr 21 10:44:29.803996 kubelet[3505]: E0421 10:44:29.801846 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" 
podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:39.675868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593497431.mount: Deactivated successfully. 
Apr 21 10:44:39.742590 containerd[1989]: time="2026-04-21T10:44:39.742530148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:39.759961 containerd[1989]: time="2026-04-21T10:44:39.759883592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:44:39.804732 kubelet[3505]: E0421 10:44:39.804672 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:39.850123 containerd[1989]: time="2026-04-21T10:44:39.846221163Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:39.857025 containerd[1989]: time="2026-04-21T10:44:39.856953968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:39.858022 containerd[1989]: time="2026-04-21T10:44:39.857966965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.935077029s" Apr 21 10:44:39.858220 containerd[1989]: time="2026-04-21T10:44:39.858187631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:44:39.865769 containerd[1989]: time="2026-04-21T10:44:39.865726667Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:44:40.038099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003882776.mount: Deactivated successfully. Apr 21 10:44:40.050688 containerd[1989]: time="2026-04-21T10:44:40.050632321Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883\"" Apr 21 10:44:40.051807 containerd[1989]: time="2026-04-21T10:44:40.051675721Z" level=info msg="StartContainer for \"d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883\"" Apr 21 10:44:40.115662 systemd[1]: Started cri-containerd-d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883.scope - libcontainer container d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883. Apr 21 10:44:40.162820 containerd[1989]: time="2026-04-21T10:44:40.162639916Z" level=info msg="StartContainer for \"d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883\" returns successfully" Apr 21 10:44:40.211162 systemd[1]: cri-containerd-d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883.scope: Deactivated successfully. 
Apr 21 10:44:40.271920 containerd[1989]: time="2026-04-21T10:44:40.271843035Z" level=info msg="shim disconnected" id=d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883 namespace=k8s.io Apr 21 10:44:40.271920 containerd[1989]: time="2026-04-21T10:44:40.271914319Z" level=warning msg="cleaning up after shim disconnected" id=d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883 namespace=k8s.io Apr 21 10:44:40.271920 containerd[1989]: time="2026-04-21T10:44:40.271927169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:44:40.676347 systemd[1]: run-containerd-runc-k8s.io-d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883-runc.lheN6d.mount: Deactivated successfully. Apr 21 10:44:40.676609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67dea2e0a6d9c3a5d0df4de86457c169175bbed2f33dae8247c9cbc798fc883-rootfs.mount: Deactivated successfully. Apr 21 10:44:40.952288 containerd[1989]: time="2026-04-21T10:44:40.951925159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:44:41.802704 kubelet[3505]: E0421 10:44:41.801374 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:43.802837 kubelet[3505]: E0421 10:44:43.802701 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:44.867412 containerd[1989]: time="2026-04-21T10:44:44.867357325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:44.869388 containerd[1989]: time="2026-04-21T10:44:44.869212552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:44:44.871672 containerd[1989]: time="2026-04-21T10:44:44.871625448Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:44.875599 containerd[1989]: time="2026-04-21T10:44:44.875530286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:44:44.876838 containerd[1989]: time="2026-04-21T10:44:44.876392723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.924424262s" Apr 21 10:44:44.876838 containerd[1989]: time="2026-04-21T10:44:44.876599065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:44:44.892236 containerd[1989]: time="2026-04-21T10:44:44.892157222Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:44:44.923093 containerd[1989]: time="2026-04-21T10:44:44.923037635Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38\"" Apr 21 10:44:44.923856 containerd[1989]: time="2026-04-21T10:44:44.923812773Z" level=info msg="StartContainer for \"b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38\"" Apr 21 10:44:44.962993 systemd[1]: Started cri-containerd-b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38.scope - libcontainer container b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38. Apr 21 10:44:45.003346 containerd[1989]: time="2026-04-21T10:44:45.003290002Z" level=info msg="StartContainer for \"b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38\" returns successfully" Apr 21 10:44:45.803119 kubelet[3505]: E0421 10:44:45.803055 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:46.166590 systemd[1]: cri-containerd-b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38.scope: Deactivated successfully. Apr 21 10:44:46.195089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38-rootfs.mount: Deactivated successfully. 
Apr 21 10:44:46.222520 containerd[1989]: time="2026-04-21T10:44:46.222431776Z" level=info msg="shim disconnected" id=b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38 namespace=k8s.io
Apr 21 10:44:46.222520 containerd[1989]: time="2026-04-21T10:44:46.222512604Z" level=warning msg="cleaning up after shim disconnected" id=b23485e85368341916e7367ea670a4c729bd2e476d9d535a1eecd5c5dab5cc38 namespace=k8s.io
Apr 21 10:44:46.222520 containerd[1989]: time="2026-04-21T10:44:46.222525320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:44:46.251549 kubelet[3505]: I0421 10:44:46.248736 3505 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 21 10:44:46.423644 systemd[1]: Created slice kubepods-burstable-pod81f2aec6_921e_4349_a810_22bcdec6b773.slice - libcontainer container kubepods-burstable-pod81f2aec6_921e_4349_a810_22bcdec6b773.slice.
Apr 21 10:44:46.444556 systemd[1]: Created slice kubepods-burstable-pod439e9caa_c7e2_48c1_a515_3023dbf91270.slice - libcontainer container kubepods-burstable-pod439e9caa_c7e2_48c1_a515_3023dbf91270.slice.
Apr 21 10:44:46.446513 kubelet[3505]: I0421 10:44:46.444413 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68a2897e-5688-4d69-aa35-2e241e661e25-calico-apiserver-certs\") pod \"calico-apiserver-747d6cc58b-rdqmf\" (UID: \"68a2897e-5688-4d69-aa35-2e241e661e25\") " pod="calico-system/calico-apiserver-747d6cc58b-rdqmf"
Apr 21 10:44:46.446663 kubelet[3505]: I0421 10:44:46.446560 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcceb021-be6c-412d-b4fd-efcc9879606c-tigera-ca-bundle\") pod \"calico-kube-controllers-6d7b6bb87f-lgg8j\" (UID: \"fcceb021-be6c-412d-b4fd-efcc9879606c\") " pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j"
Apr 21 10:44:46.446663 kubelet[3505]: I0421 10:44:46.446592 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vxd5\" (UniqueName: \"kubernetes.io/projected/fcceb021-be6c-412d-b4fd-efcc9879606c-kube-api-access-5vxd5\") pod \"calico-kube-controllers-6d7b6bb87f-lgg8j\" (UID: \"fcceb021-be6c-412d-b4fd-efcc9879606c\") " pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j"
Apr 21 10:44:46.446663 kubelet[3505]: I0421 10:44:46.446626 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81f2aec6-921e-4349-a810-22bcdec6b773-config-volume\") pod \"coredns-66bc5c9577-r56p8\" (UID: \"81f2aec6-921e-4349-a810-22bcdec6b773\") " pod="kube-system/coredns-66bc5c9577-r56p8"
Apr 21 10:44:46.446663 kubelet[3505]: I0421 10:44:46.446656 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/439e9caa-c7e2-48c1-a515-3023dbf91270-config-volume\") pod \"coredns-66bc5c9577-7gjwc\" (UID: \"439e9caa-c7e2-48c1-a515-3023dbf91270\") " pod="kube-system/coredns-66bc5c9577-7gjwc"
Apr 21 10:44:46.446886 kubelet[3505]: I0421 10:44:46.446681 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6zj\" (UniqueName: \"kubernetes.io/projected/439e9caa-c7e2-48c1-a515-3023dbf91270-kube-api-access-8k6zj\") pod \"coredns-66bc5c9577-7gjwc\" (UID: \"439e9caa-c7e2-48c1-a515-3023dbf91270\") " pod="kube-system/coredns-66bc5c9577-7gjwc"
Apr 21 10:44:46.446886 kubelet[3505]: I0421 10:44:46.446703 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5194f4a-5b68-43fb-8b6d-2794530d8be1-calico-apiserver-certs\") pod \"calico-apiserver-747d6cc58b-fvqgg\" (UID: \"d5194f4a-5b68-43fb-8b6d-2794530d8be1\") " pod="calico-system/calico-apiserver-747d6cc58b-fvqgg"
Apr 21 10:44:46.446886 kubelet[3505]: I0421 10:44:46.446737 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf7c\" (UniqueName: \"kubernetes.io/projected/81f2aec6-921e-4349-a810-22bcdec6b773-kube-api-access-vtf7c\") pod \"coredns-66bc5c9577-r56p8\" (UID: \"81f2aec6-921e-4349-a810-22bcdec6b773\") " pod="kube-system/coredns-66bc5c9577-r56p8"
Apr 21 10:44:46.446886 kubelet[3505]: I0421 10:44:46.446761 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2cg\" (UniqueName: \"kubernetes.io/projected/d5194f4a-5b68-43fb-8b6d-2794530d8be1-kube-api-access-fz2cg\") pod \"calico-apiserver-747d6cc58b-fvqgg\" (UID: \"d5194f4a-5b68-43fb-8b6d-2794530d8be1\") " pod="calico-system/calico-apiserver-747d6cc58b-fvqgg"
Apr 21 10:44:46.446886 kubelet[3505]: I0421 10:44:46.446797 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz57b\" (UniqueName: \"kubernetes.io/projected/68a2897e-5688-4d69-aa35-2e241e661e25-kube-api-access-tz57b\") pod \"calico-apiserver-747d6cc58b-rdqmf\" (UID: \"68a2897e-5688-4d69-aa35-2e241e661e25\") " pod="calico-system/calico-apiserver-747d6cc58b-rdqmf"
Apr 21 10:44:46.456633 systemd[1]: Created slice kubepods-besteffort-podd5194f4a_5b68_43fb_8b6d_2794530d8be1.slice - libcontainer container kubepods-besteffort-podd5194f4a_5b68_43fb_8b6d_2794530d8be1.slice.
Apr 21 10:44:46.467951 systemd[1]: Created slice kubepods-besteffort-pod68a2897e_5688_4d69_aa35_2e241e661e25.slice - libcontainer container kubepods-besteffort-pod68a2897e_5688_4d69_aa35_2e241e661e25.slice.
Apr 21 10:44:46.478135 systemd[1]: Created slice kubepods-besteffort-podb96b65f2_d7e3_4f8e_880f_b3f8c756fb62.slice - libcontainer container kubepods-besteffort-podb96b65f2_d7e3_4f8e_880f_b3f8c756fb62.slice.
Apr 21 10:44:46.490735 systemd[1]: Created slice kubepods-besteffort-podfcceb021_be6c_412d_b4fd_efcc9879606c.slice - libcontainer container kubepods-besteffort-podfcceb021_be6c_412d_b4fd_efcc9879606c.slice.
Apr 21 10:44:46.499143 systemd[1]: Created slice kubepods-besteffort-pod98f2d34b_4c18_4d13_a400_a3baedae5fec.slice - libcontainer container kubepods-besteffort-pod98f2d34b_4c18_4d13_a400_a3baedae5fec.slice.
Apr 21 10:44:46.547468 kubelet[3505]: I0421 10:44:46.547414 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-nginx-config\") pod \"whisker-6f7d6885f6-d56b5\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") " pod="calico-system/whisker-6f7d6885f6-d56b5"
Apr 21 10:44:46.547920 kubelet[3505]: I0421 10:44:46.547895 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-ca-bundle\") pod \"whisker-6f7d6885f6-d56b5\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") " pod="calico-system/whisker-6f7d6885f6-d56b5"
Apr 21 10:44:46.548679 kubelet[3505]: I0421 10:44:46.547935 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96b65f2-d7e3-4f8e-880f-b3f8c756fb62-config\") pod \"goldmane-cccfbd5cf-gmpsx\" (UID: \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\") " pod="calico-system/goldmane-cccfbd5cf-gmpsx"
Apr 21 10:44:46.548679 kubelet[3505]: I0421 10:44:46.547958 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96b65f2-d7e3-4f8e-880f-b3f8c756fb62-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-gmpsx\" (UID: \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\") " pod="calico-system/goldmane-cccfbd5cf-gmpsx"
Apr 21 10:44:46.548679 kubelet[3505]: I0421 10:44:46.547980 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc45q\" (UniqueName: \"kubernetes.io/projected/b96b65f2-d7e3-4f8e-880f-b3f8c756fb62-kube-api-access-wc45q\") pod \"goldmane-cccfbd5cf-gmpsx\" (UID: \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\") " pod="calico-system/goldmane-cccfbd5cf-gmpsx"
Apr 21 10:44:46.548679 kubelet[3505]: I0421 10:44:46.548012 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b96b65f2-d7e3-4f8e-880f-b3f8c756fb62-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-gmpsx\" (UID: \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\") " pod="calico-system/goldmane-cccfbd5cf-gmpsx"
Apr 21 10:44:46.548679 kubelet[3505]: I0421 10:44:46.548052 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-backend-key-pair\") pod \"whisker-6f7d6885f6-d56b5\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") " pod="calico-system/whisker-6f7d6885f6-d56b5"
Apr 21 10:44:46.549793 kubelet[3505]: I0421 10:44:46.548111 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgz4r\" (UniqueName: \"kubernetes.io/projected/98f2d34b-4c18-4d13-a400-a3baedae5fec-kube-api-access-zgz4r\") pod \"whisker-6f7d6885f6-d56b5\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") " pod="calico-system/whisker-6f7d6885f6-d56b5"
Apr 21 10:44:46.748541 containerd[1989]: time="2026-04-21T10:44:46.747558869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r56p8,Uid:81f2aec6-921e-4349-a810-22bcdec6b773,Namespace:kube-system,Attempt:0,}"
Apr 21 10:44:46.756753 containerd[1989]: time="2026-04-21T10:44:46.756706012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7gjwc,Uid:439e9caa-c7e2-48c1-a515-3023dbf91270,Namespace:kube-system,Attempt:0,}"
Apr 21 10:44:46.768796 containerd[1989]: time="2026-04-21T10:44:46.768753082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-fvqgg,Uid:d5194f4a-5b68-43fb-8b6d-2794530d8be1,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:46.778906 containerd[1989]: time="2026-04-21T10:44:46.778841757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-rdqmf,Uid:68a2897e-5688-4d69-aa35-2e241e661e25,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:46.791018 containerd[1989]: time="2026-04-21T10:44:46.790974169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-gmpsx,Uid:b96b65f2-d7e3-4f8e-880f-b3f8c756fb62,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:46.800312 containerd[1989]: time="2026-04-21T10:44:46.800262837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7b6bb87f-lgg8j,Uid:fcceb021-be6c-412d-b4fd-efcc9879606c,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:46.807257 containerd[1989]: time="2026-04-21T10:44:46.807216074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f7d6885f6-d56b5,Uid:98f2d34b-4c18-4d13-a400-a3baedae5fec,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:47.030347 containerd[1989]: time="2026-04-21T10:44:47.030046382Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:44:47.060616 containerd[1989]: time="2026-04-21T10:44:47.060367977Z" level=info msg="CreateContainer within sandbox \"b54c6a791fd0fc7b89532018f27ac912b0b40e88554684755d4338d765674754\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d\""
Apr 21 10:44:47.062655 containerd[1989]: time="2026-04-21T10:44:47.061071582Z" level=info msg="StartContainer for \"d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d\""
Apr 21 10:44:47.093625 systemd[1]: Started cri-containerd-d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d.scope - libcontainer container d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d.
Apr 21 10:44:47.131939 containerd[1989]: time="2026-04-21T10:44:47.131890885Z" level=info msg="StartContainer for \"d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d\" returns successfully"
Apr 21 10:44:47.809596 systemd[1]: Created slice kubepods-besteffort-podfbe93840_23f5_4bbe_b319_4df10f6383eb.slice - libcontainer container kubepods-besteffort-podfbe93840_23f5_4bbe_b319_4df10f6383eb.slice.
Apr 21 10:44:47.815797 containerd[1989]: time="2026-04-21T10:44:47.815755456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-68cq7,Uid:fbe93840-23f5-4bbe-b319-4df10f6383eb,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:48.038657 kubelet[3505]: I0421 10:44:48.036139 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xkcqj" podStartSLOduration=4.3691269 podStartE2EDuration="25.029607755s" podCreationTimestamp="2026-04-21 10:44:23 +0000 UTC" firstStartedPulling="2026-04-21 10:44:24.226004123 +0000 UTC m=+18.656029204" lastFinishedPulling="2026-04-21 10:44:44.886484974 +0000 UTC m=+39.316510059" observedRunningTime="2026-04-21 10:44:48.029374782 +0000 UTC m=+42.459399876" watchObservedRunningTime="2026-04-21 10:44:48.029607755 +0000 UTC m=+42.459632849"
Apr 21 10:44:50.023458 kubelet[3505]: I0421 10:44:50.023390 3505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:44:50.510811 containerd[1989]: time="2026-04-21T10:44:50.510748148Z" level=error msg="Failed to destroy network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.522586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41-shm.mount: Deactivated successfully.
Apr 21 10:44:50.534472 containerd[1989]: time="2026-04-21T10:44:50.530542664Z" level=error msg="Failed to destroy network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.535330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d-shm.mount: Deactivated successfully.
Apr 21 10:44:50.552516 containerd[1989]: time="2026-04-21T10:44:50.551422188Z" level=error msg="Failed to destroy network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.556092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0-shm.mount: Deactivated successfully.
Apr 21 10:44:50.558591 containerd[1989]: time="2026-04-21T10:44:50.558374859Z" level=error msg="encountered an error cleaning up failed sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.558591 containerd[1989]: time="2026-04-21T10:44:50.558491800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r56p8,Uid:81f2aec6-921e-4349-a810-22bcdec6b773,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559206 containerd[1989]: time="2026-04-21T10:44:50.558944982Z" level=error msg="encountered an error cleaning up failed sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559206 containerd[1989]: time="2026-04-21T10:44:50.559018608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7b6bb87f-lgg8j,Uid:fcceb021-be6c-412d-b4fd-efcc9879606c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559206 containerd[1989]: time="2026-04-21T10:44:50.559172178Z" level=error msg="Failed to destroy network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559584 containerd[1989]: time="2026-04-21T10:44:50.559508598Z" level=error msg="encountered an error cleaning up failed sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559584 containerd[1989]: time="2026-04-21T10:44:50.559559765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-rdqmf,Uid:68a2897e-5688-4d69-aa35-2e241e661e25,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559708 containerd[1989]: time="2026-04-21T10:44:50.559612613Z" level=error msg="encountered an error cleaning up failed sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.559708 containerd[1989]: time="2026-04-21T10:44:50.559650462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-68cq7,Uid:fbe93840-23f5-4bbe-b319-4df10f6383eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.567289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf-shm.mount: Deactivated successfully.
Apr 21 10:44:50.582500 containerd[1989]: time="2026-04-21T10:44:50.582428067Z" level=error msg="Failed to destroy network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.586373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41-shm.mount: Deactivated successfully.
Apr 21 10:44:50.588101 containerd[1989]: time="2026-04-21T10:44:50.586757251Z" level=error msg="encountered an error cleaning up failed sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.588101 containerd[1989]: time="2026-04-21T10:44:50.586839128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7gjwc,Uid:439e9caa-c7e2-48c1-a515-3023dbf91270,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.592599 kubelet[3505]: E0421 10:44:50.592327 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.592599 kubelet[3505]: E0421 10:44:50.592505 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.593515 containerd[1989]: time="2026-04-21T10:44:50.593471357Z" level=error msg="Failed to destroy network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.596255 kubelet[3505]: E0421 10:44:50.593279 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-r56p8"
Apr 21 10:44:50.596255 kubelet[3505]: E0421 10:44:50.595030 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-r56p8"
Apr 21 10:44:50.596255 kubelet[3505]: E0421 10:44:50.595110 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-r56p8_kube-system(81f2aec6-921e-4349-a810-22bcdec6b773)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-r56p8_kube-system(81f2aec6-921e-4349-a810-22bcdec6b773)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-r56p8" podUID="81f2aec6-921e-4349-a810-22bcdec6b773"
Apr 21 10:44:50.597539 containerd[1989]: time="2026-04-21T10:44:50.595485307Z" level=error msg="encountered an error cleaning up failed sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.597539 containerd[1989]: time="2026-04-21T10:44:50.596149093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f7d6885f6-d56b5,Uid:98f2d34b-4c18-4d13-a400-a3baedae5fec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.597674 kubelet[3505]: E0421 10:44:50.593279 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7gjwc"
Apr 21 10:44:50.597674 kubelet[3505]: E0421 10:44:50.595394 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7gjwc"
Apr 21 10:44:50.597674 kubelet[3505]: E0421 10:44:50.595579 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7gjwc_kube-system(439e9caa-c7e2-48c1-a515-3023dbf91270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7gjwc_kube-system(439e9caa-c7e2-48c1-a515-3023dbf91270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7gjwc" podUID="439e9caa-c7e2-48c1-a515-3023dbf91270"
Apr 21 10:44:50.597860 kubelet[3505]: E0421 10:44:50.595667 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.597860 kubelet[3505]: E0421 10:44:50.595700 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j"
Apr 21 10:44:50.597860 kubelet[3505]: E0421 10:44:50.595721 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j"
Apr 21 10:44:50.598021 kubelet[3505]: E0421 10:44:50.595765 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d7b6bb87f-lgg8j_calico-system(fcceb021-be6c-412d-b4fd-efcc9879606c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d7b6bb87f-lgg8j_calico-system(fcceb021-be6c-412d-b4fd-efcc9879606c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j" podUID="fcceb021-be6c-412d-b4fd-efcc9879606c"
Apr 21 10:44:50.598021 kubelet[3505]: E0421 10:44:50.595804 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.598021 kubelet[3505]: E0421 10:44:50.595824 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d6cc58b-rdqmf"
Apr 21 10:44:50.598200 kubelet[3505]: E0421 10:44:50.595842 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d6cc58b-rdqmf"
Apr 21 10:44:50.598200 kubelet[3505]: E0421 10:44:50.595876 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d6cc58b-rdqmf_calico-system(68a2897e-5688-4d69-aa35-2e241e661e25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d6cc58b-rdqmf_calico-system(68a2897e-5688-4d69-aa35-2e241e661e25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d6cc58b-rdqmf" podUID="68a2897e-5688-4d69-aa35-2e241e661e25"
Apr 21 10:44:50.598200 kubelet[3505]: E0421 10:44:50.595911 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:50.598374 kubelet[3505]: E0421 10:44:50.595930 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:50.598374 kubelet[3505]: E0421 10:44:50.595946 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-68cq7"
Apr 21 10:44:50.598374 kubelet[3505]: E0421 10:44:50.595981 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-68cq7_calico-system(fbe93840-23f5-4bbe-b319-4df10f6383eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-68cq7_calico-system(fbe93840-23f5-4bbe-b319-4df10f6383eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb"
Apr 21 10:44:50.603257 kubelet[3505]: E0421 10:44:50.599248 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.603257 kubelet[3505]: E0421 10:44:50.599404 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f7d6885f6-d56b5" Apr 21 10:44:50.603257 kubelet[3505]: E0421 10:44:50.599754 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f7d6885f6-d56b5" Apr 21 10:44:50.603365 containerd[1989]: time="2026-04-21T10:44:50.601163693Z" level=error msg="Failed to destroy network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.603403 kubelet[3505]: E0421 10:44:50.600200 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f7d6885f6-d56b5_calico-system(98f2d34b-4c18-4d13-a400-a3baedae5fec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f7d6885f6-d56b5_calico-system(98f2d34b-4c18-4d13-a400-a3baedae5fec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f7d6885f6-d56b5" podUID="98f2d34b-4c18-4d13-a400-a3baedae5fec" Apr 21 10:44:50.603858 containerd[1989]: time="2026-04-21T10:44:50.603811627Z" level=error msg="encountered an error cleaning up failed sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.603950 containerd[1989]: time="2026-04-21T10:44:50.603888434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-fvqgg,Uid:d5194f4a-5b68-43fb-8b6d-2794530d8be1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.604166 kubelet[3505]: E0421 10:44:50.604127 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.604228 kubelet[3505]: E0421 10:44:50.604181 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d6cc58b-fvqgg" Apr 21 10:44:50.604228 kubelet[3505]: E0421 10:44:50.604211 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-747d6cc58b-fvqgg" Apr 21 10:44:50.604331 kubelet[3505]: E0421 10:44:50.604274 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747d6cc58b-fvqgg_calico-system(d5194f4a-5b68-43fb-8b6d-2794530d8be1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747d6cc58b-fvqgg_calico-system(d5194f4a-5b68-43fb-8b6d-2794530d8be1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d6cc58b-fvqgg" podUID="d5194f4a-5b68-43fb-8b6d-2794530d8be1" Apr 21 10:44:50.605036 containerd[1989]: time="2026-04-21T10:44:50.605002507Z" level=error msg="Failed to destroy network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 21 10:44:50.605325 containerd[1989]: time="2026-04-21T10:44:50.605291450Z" level=error msg="encountered an error cleaning up failed sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.605394 containerd[1989]: time="2026-04-21T10:44:50.605351590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-gmpsx,Uid:b96b65f2-d7e3-4f8e-880f-b3f8c756fb62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.605671 kubelet[3505]: E0421 10:44:50.605631 3505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:50.605747 kubelet[3505]: E0421 10:44:50.605693 3505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-gmpsx" Apr 21 10:44:50.605747 kubelet[3505]: E0421 
10:44:50.605718 3505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-gmpsx" Apr 21 10:44:50.605880 kubelet[3505]: E0421 10:44:50.605796 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-gmpsx_calico-system(b96b65f2-d7e3-4f8e-880f-b3f8c756fb62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-gmpsx_calico-system(b96b65f2-d7e3-4f8e-880f-b3f8c756fb62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-gmpsx" podUID="b96b65f2-d7e3-4f8e-880f-b3f8c756fb62" Apr 21 10:44:51.030083 containerd[1989]: time="2026-04-21T10:44:51.029754930Z" level=info msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\"" Apr 21 10:44:51.033291 containerd[1989]: time="2026-04-21T10:44:51.033240733Z" level=info msg="Ensure that sandbox 8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a in task-service has been cleanup successfully" Apr 21 10:44:51.033682 kubelet[3505]: I0421 10:44:51.033645 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:44:51.034095 kubelet[3505]: I0421 10:44:51.033751 3505 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:44:51.038647 containerd[1989]: time="2026-04-21T10:44:51.038481039Z" level=info msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" Apr 21 10:44:51.038951 containerd[1989]: time="2026-04-21T10:44:51.038729008Z" level=info msg="Ensure that sandbox 2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e in task-service has been cleanup successfully" Apr 21 10:44:51.050395 kubelet[3505]: I0421 10:44:51.050296 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:44:51.052322 containerd[1989]: time="2026-04-21T10:44:51.052235273Z" level=info msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\"" Apr 21 10:44:51.054967 containerd[1989]: time="2026-04-21T10:44:51.054925750Z" level=info msg="Ensure that sandbox e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0 in task-service has been cleanup successfully" Apr 21 10:44:51.058521 kubelet[3505]: I0421 10:44:51.058398 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:44:51.060382 containerd[1989]: time="2026-04-21T10:44:51.060343616Z" level=info msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" Apr 21 10:44:51.060683 containerd[1989]: time="2026-04-21T10:44:51.060601337Z" level=info msg="Ensure that sandbox 575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf in task-service has been cleanup successfully" Apr 21 10:44:51.066834 kubelet[3505]: I0421 10:44:51.066783 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:44:51.072144 
containerd[1989]: time="2026-04-21T10:44:51.072099832Z" level=info msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" Apr 21 10:44:51.072452 containerd[1989]: time="2026-04-21T10:44:51.072317223Z" level=info msg="Ensure that sandbox ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c in task-service has been cleanup successfully" Apr 21 10:44:51.081016 kubelet[3505]: I0421 10:44:51.080961 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:44:51.083904 containerd[1989]: time="2026-04-21T10:44:51.083710568Z" level=info msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\"" Apr 21 10:44:51.088807 containerd[1989]: time="2026-04-21T10:44:51.088621968Z" level=info msg="Ensure that sandbox e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41 in task-service has been cleanup successfully" Apr 21 10:44:51.098623 kubelet[3505]: I0421 10:44:51.098589 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:44:51.103809 containerd[1989]: time="2026-04-21T10:44:51.103758924Z" level=info msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" Apr 21 10:44:51.107801 kubelet[3505]: I0421 10:44:51.107281 3505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:44:51.110719 containerd[1989]: time="2026-04-21T10:44:51.110570931Z" level=info msg="Ensure that sandbox 8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41 in task-service has been cleanup successfully" Apr 21 10:44:51.114485 containerd[1989]: time="2026-04-21T10:44:51.114370430Z" level=info msg="StopPodSandbox for 
\"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" Apr 21 10:44:51.114827 containerd[1989]: time="2026-04-21T10:44:51.114611741Z" level=info msg="Ensure that sandbox aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d in task-service has been cleanup successfully" Apr 21 10:44:51.139325 containerd[1989]: time="2026-04-21T10:44:51.139182617Z" level=error msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" failed" error="failed to destroy network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.140186 kubelet[3505]: E0421 10:44:51.139786 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:44:51.140186 kubelet[3505]: E0421 10:44:51.139855 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a"} Apr 21 10:44:51.140186 kubelet[3505]: E0421 10:44:51.139922 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:44:51.140186 kubelet[3505]: E0421 10:44:51.139963 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f7d6885f6-d56b5" podUID="98f2d34b-4c18-4d13-a400-a3baedae5fec" Apr 21 10:44:51.192495 containerd[1989]: time="2026-04-21T10:44:51.192335846Z" level=error msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" failed" error="failed to destroy network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.193037 kubelet[3505]: E0421 10:44:51.192864 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:44:51.193037 kubelet[3505]: E0421 10:44:51.192917 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e"} Apr 21 
10:44:51.193037 kubelet[3505]: E0421 10:44:51.192959 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:44:51.193037 kubelet[3505]: E0421 10:44:51.192993 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-gmpsx" podUID="b96b65f2-d7e3-4f8e-880f-b3f8c756fb62" Apr 21 10:44:51.229147 containerd[1989]: time="2026-04-21T10:44:51.228641809Z" level=error msg="StopPodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" failed" error="failed to destroy network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.229285 kubelet[3505]: E0421 10:44:51.228959 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:44:51.229285 kubelet[3505]: E0421 10:44:51.229013 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d"} Apr 21 10:44:51.229285 kubelet[3505]: E0421 10:44:51.229053 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbe93840-23f5-4bbe-b319-4df10f6383eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:44:51.229285 kubelet[3505]: E0421 10:44:51.229095 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbe93840-23f5-4bbe-b319-4df10f6383eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-68cq7" podUID="fbe93840-23f5-4bbe-b319-4df10f6383eb" Apr 21 10:44:51.232528 containerd[1989]: time="2026-04-21T10:44:51.232304759Z" level=error msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" failed" error="failed to destroy network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.233139 kubelet[3505]: E0421 10:44:51.232875 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:44:51.233139 kubelet[3505]: E0421 10:44:51.232933 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf"} Apr 21 10:44:51.233139 kubelet[3505]: E0421 10:44:51.232974 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68a2897e-5688-4d69-aa35-2e241e661e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:44:51.233139 kubelet[3505]: E0421 10:44:51.233009 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68a2897e-5688-4d69-aa35-2e241e661e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-747d6cc58b-rdqmf" podUID="68a2897e-5688-4d69-aa35-2e241e661e25" Apr 21 10:44:51.251739 containerd[1989]: time="2026-04-21T10:44:51.251233130Z" level=error msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" failed" error="failed to destroy network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.251882 kubelet[3505]: E0421 10:44:51.251565 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:44:51.251882 kubelet[3505]: E0421 10:44:51.251618 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41"} Apr 21 10:44:51.251882 kubelet[3505]: E0421 10:44:51.251658 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"439e9caa-c7e2-48c1-a515-3023dbf91270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:44:51.251882 kubelet[3505]: E0421 10:44:51.251695 3505 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"439e9caa-c7e2-48c1-a515-3023dbf91270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7gjwc" podUID="439e9caa-c7e2-48c1-a515-3023dbf91270" Apr 21 10:44:51.252714 containerd[1989]: time="2026-04-21T10:44:51.252290942Z" level=error msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" failed" error="failed to destroy network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:44:51.252800 kubelet[3505]: E0421 10:44:51.252571 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:44:51.252800 kubelet[3505]: E0421 10:44:51.252634 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0"} Apr 21 10:44:51.253055 kubelet[3505]: E0421 10:44:51.252682 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"81f2aec6-921e-4349-a810-22bcdec6b773\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 21 10:44:51.253055 kubelet[3505]: E0421 10:44:51.253015 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81f2aec6-921e-4349-a810-22bcdec6b773\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-r56p8" podUID="81f2aec6-921e-4349-a810-22bcdec6b773"
Apr 21 10:44:51.262197 containerd[1989]: time="2026-04-21T10:44:51.262067286Z" level=error msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" failed" error="failed to destroy network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:51.262821 kubelet[3505]: E0421 10:44:51.262570 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c"
Apr 21 10:44:51.262821 kubelet[3505]: E0421 10:44:51.262622 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c"}
Apr 21 10:44:51.262821 kubelet[3505]: E0421 10:44:51.262662 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5194f4a-5b68-43fb-8b6d-2794530d8be1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 21 10:44:51.262821 kubelet[3505]: E0421 10:44:51.262697 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5194f4a-5b68-43fb-8b6d-2794530d8be1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-747d6cc58b-fvqgg" podUID="d5194f4a-5b68-43fb-8b6d-2794530d8be1"
Apr 21 10:44:51.266338 containerd[1989]: time="2026-04-21T10:44:51.266292069Z" level=error msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" failed" error="failed to destroy network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:44:51.266606 kubelet[3505]: E0421 10:44:51.266566 3505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41"
Apr 21 10:44:51.266714 kubelet[3505]: E0421 10:44:51.266616 3505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41"}
Apr 21 10:44:51.266714 kubelet[3505]: E0421 10:44:51.266676 3505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcceb021-be6c-412d-b4fd-efcc9879606c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 21 10:44:51.267567 kubelet[3505]: E0421 10:44:51.266756 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcceb021-be6c-412d-b4fd-efcc9879606c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j" podUID="fcceb021-be6c-412d-b4fd-efcc9879606c"
Apr 21 10:44:51.514965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a-shm.mount: Deactivated successfully.
Apr 21 10:44:51.515115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e-shm.mount: Deactivated successfully.
Apr 21 10:44:51.515208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c-shm.mount: Deactivated successfully.
Apr 21 10:44:53.624185 kubelet[3505]: I0421 10:44:53.623712 3505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:44:55.184461 containerd[1989]: time="2026-04-21T10:44:55.182660216Z" level=info msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\""
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.706 [INFO][4746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.707 [INFO][4746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" iface="eth0" netns="/var/run/netns/cni-55b65704-5676-fc08-35de-8fc93e07f66c"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.708 [INFO][4746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" iface="eth0" netns="/var/run/netns/cni-55b65704-5676-fc08-35de-8fc93e07f66c"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.709 [INFO][4746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" iface="eth0" netns="/var/run/netns/cni-55b65704-5676-fc08-35de-8fc93e07f66c"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.709 [INFO][4746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.710 [INFO][4746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.768 [INFO][4772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.768 [INFO][4772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.768 [INFO][4772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.776 [WARNING][4772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.776 [INFO][4772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0"
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.778 [INFO][4772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:44:55.783246 containerd[1989]: 2026-04-21 10:44:55.781 [INFO][4746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a"
Apr 21 10:44:55.784597 containerd[1989]: time="2026-04-21T10:44:55.784536405Z" level=info msg="TearDown network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" successfully"
Apr 21 10:44:55.784771 containerd[1989]: time="2026-04-21T10:44:55.784663440Z" level=info msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" returns successfully"
Apr 21 10:44:55.787675 systemd[1]: run-netns-cni\x2d55b65704\x2d5676\x2dfc08\x2d35de\x2d8fc93e07f66c.mount: Deactivated successfully.
Apr 21 10:44:55.922156 kubelet[3505]: I0421 10:44:55.921779 3505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-ca-bundle\") pod \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") "
Apr 21 10:44:55.922156 kubelet[3505]: I0421 10:44:55.921848 3505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgz4r\" (UniqueName: \"kubernetes.io/projected/98f2d34b-4c18-4d13-a400-a3baedae5fec-kube-api-access-zgz4r\") pod \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") "
Apr 21 10:44:55.922156 kubelet[3505]: I0421 10:44:55.921898 3505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-nginx-config\") pod \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") "
Apr 21 10:44:55.922156 kubelet[3505]: I0421 10:44:55.921938 3505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-backend-key-pair\") pod \"98f2d34b-4c18-4d13-a400-a3baedae5fec\" (UID: \"98f2d34b-4c18-4d13-a400-a3baedae5fec\") "
Apr 21 10:44:55.932207 kubelet[3505]: I0421 10:44:55.927584 3505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "98f2d34b-4c18-4d13-a400-a3baedae5fec" (UID: "98f2d34b-4c18-4d13-a400-a3baedae5fec"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:44:55.939463 kubelet[3505]: I0421 10:44:55.936634 3505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "98f2d34b-4c18-4d13-a400-a3baedae5fec" (UID: "98f2d34b-4c18-4d13-a400-a3baedae5fec"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:44:55.940085 kubelet[3505]: I0421 10:44:55.940036 3505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "98f2d34b-4c18-4d13-a400-a3baedae5fec" (UID: "98f2d34b-4c18-4d13-a400-a3baedae5fec"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 10:44:55.941686 kubelet[3505]: I0421 10:44:55.941636 3505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f2d34b-4c18-4d13-a400-a3baedae5fec-kube-api-access-zgz4r" (OuterVolumeSpecName: "kube-api-access-zgz4r") pod "98f2d34b-4c18-4d13-a400-a3baedae5fec" (UID: "98f2d34b-4c18-4d13-a400-a3baedae5fec"). InnerVolumeSpecName "kube-api-access-zgz4r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:44:55.942603 systemd[1]: var-lib-kubelet-pods-98f2d34b\x2d4c18\x2d4d13\x2da400\x2da3baedae5fec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzgz4r.mount: Deactivated successfully.
Apr 21 10:44:55.947850 systemd[1]: var-lib-kubelet-pods-98f2d34b\x2d4c18\x2d4d13\x2da400\x2da3baedae5fec-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 21 10:44:56.022716 kubelet[3505]: I0421 10:44:56.022672 3505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-ca-bundle\") on node \"ip-172-31-20-236\" DevicePath \"\""
Apr 21 10:44:56.022716 kubelet[3505]: I0421 10:44:56.022710 3505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zgz4r\" (UniqueName: \"kubernetes.io/projected/98f2d34b-4c18-4d13-a400-a3baedae5fec-kube-api-access-zgz4r\") on node \"ip-172-31-20-236\" DevicePath \"\""
Apr 21 10:44:56.022716 kubelet[3505]: I0421 10:44:56.022725 3505 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98f2d34b-4c18-4d13-a400-a3baedae5fec-nginx-config\") on node \"ip-172-31-20-236\" DevicePath \"\""
Apr 21 10:44:56.022969 kubelet[3505]: I0421 10:44:56.022739 3505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f2d34b-4c18-4d13-a400-a3baedae5fec-whisker-backend-key-pair\") on node \"ip-172-31-20-236\" DevicePath \"\""
Apr 21 10:44:56.140081 systemd[1]: Removed slice kubepods-besteffort-pod98f2d34b_4c18_4d13_a400_a3baedae5fec.slice - libcontainer container kubepods-besteffort-pod98f2d34b_4c18_4d13_a400_a3baedae5fec.slice.
Apr 21 10:44:56.230592 systemd[1]: Created slice kubepods-besteffort-poda46470ab_e1e3_41d9_a8d5_ef96d18b9ecf.slice - libcontainer container kubepods-besteffort-poda46470ab_e1e3_41d9_a8d5_ef96d18b9ecf.slice.
Apr 21 10:44:56.325900 kubelet[3505]: I0421 10:44:56.325829 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf-whisker-ca-bundle\") pod \"whisker-75cfc67b49-5vflf\" (UID: \"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf\") " pod="calico-system/whisker-75cfc67b49-5vflf"
Apr 21 10:44:56.325900 kubelet[3505]: I0421 10:44:56.325900 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zvfr\" (UniqueName: \"kubernetes.io/projected/a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf-kube-api-access-7zvfr\") pod \"whisker-75cfc67b49-5vflf\" (UID: \"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf\") " pod="calico-system/whisker-75cfc67b49-5vflf"
Apr 21 10:44:56.326221 kubelet[3505]: I0421 10:44:56.325933 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf-nginx-config\") pod \"whisker-75cfc67b49-5vflf\" (UID: \"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf\") " pod="calico-system/whisker-75cfc67b49-5vflf"
Apr 21 10:44:56.326221 kubelet[3505]: I0421 10:44:56.325977 3505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf-whisker-backend-key-pair\") pod \"whisker-75cfc67b49-5vflf\" (UID: \"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf\") " pod="calico-system/whisker-75cfc67b49-5vflf"
Apr 21 10:44:56.544147 containerd[1989]: time="2026-04-21T10:44:56.543611762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cfc67b49-5vflf,Uid:a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf,Namespace:calico-system,Attempt:0,}"
Apr 21 10:44:57.024849 systemd-networkd[1621]: cali3515bba5897: Link UP
Apr 21 10:44:57.027204 systemd-networkd[1621]: cali3515bba5897: Gained carrier
Apr 21 10:44:57.039890 (udev-worker)[4898]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.615 [ERROR][4839] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.655 [INFO][4839] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0 whisker-75cfc67b49- calico-system a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf 957 0 2026-04-21 10:44:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75cfc67b49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-236 whisker-75cfc67b49-5vflf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3515bba5897 [] [] }} ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.655 [INFO][4839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.735 [INFO][4853] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" HandleID="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Workload="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.752 [INFO][4853] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" HandleID="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Workload="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005fc350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"whisker-75cfc67b49-5vflf", "timestamp":"2026-04-21 10:44:56.735385123 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00043e160)}
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.752 [INFO][4853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.752 [INFO][4853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.752 [INFO][4853] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236'
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.758 [INFO][4853] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.775 [INFO][4853] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.785 [INFO][4853] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.792 [INFO][4853] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.797 [INFO][4853] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.798 [INFO][4853] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.801 [INFO][4853] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.809 [INFO][4853] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.818 [INFO][4853] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.65/26] block=192.168.120.64/26 handle="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.818 [INFO][4853] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.65/26] handle="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" host="ip-172-31-20-236"
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.818 [INFO][4853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:44:57.058901 containerd[1989]: 2026-04-21 10:44:56.818 [INFO][4853] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.65/26] IPv6=[] ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" HandleID="k8s-pod-network.31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Workload="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:56.822 [INFO][4839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0", GenerateName:"whisker-75cfc67b49-", Namespace:"calico-system", SelfLink:"", UID:"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75cfc67b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"whisker-75cfc67b49-5vflf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3515bba5897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:56.822 [INFO][4839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.65/32] ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:56.822 [INFO][4839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3515bba5897 ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:57.028 [INFO][4839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:57.029 [INFO][4839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0", GenerateName:"whisker-75cfc67b49-", Namespace:"calico-system", SelfLink:"", UID:"a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75cfc67b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf", Pod:"whisker-75cfc67b49-5vflf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3515bba5897", MAC:"b2:ba:4c:7d:30:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:44:57.068318 containerd[1989]: 2026-04-21 10:44:57.054 [INFO][4839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf" Namespace="calico-system" Pod="whisker-75cfc67b49-5vflf" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--75cfc67b49--5vflf-eth0"
Apr 21 10:44:57.151028 containerd[1989]: time="2026-04-21T10:44:57.150486015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:44:57.151028 containerd[1989]: time="2026-04-21T10:44:57.150690745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:44:57.151028 containerd[1989]: time="2026-04-21T10:44:57.150745825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:57.152080 containerd[1989]: time="2026-04-21T10:44:57.151779829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:44:57.208471 systemd[1]: run-containerd-runc-k8s.io-31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf-runc.4Gn9qc.mount: Deactivated successfully.
Apr 21 10:44:57.221715 systemd[1]: Started cri-containerd-31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf.scope - libcontainer container 31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf.
Apr 21 10:44:57.385105 containerd[1989]: time="2026-04-21T10:44:57.385052943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cfc67b49-5vflf,Uid:a46470ab-e1e3-41d9-a8d5-ef96d18b9ecf,Namespace:calico-system,Attempt:0,} returns sandbox id \"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf\""
Apr 21 10:44:57.413393 containerd[1989]: time="2026-04-21T10:44:57.413350361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 21 10:44:57.540486 kernel: calico-node[4891]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 21 10:44:57.822201 kubelet[3505]: I0421 10:44:57.822044 3505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f2d34b-4c18-4d13-a400-a3baedae5fec" path="/var/lib/kubelet/pods/98f2d34b-4c18-4d13-a400-a3baedae5fec/volumes"
Apr 21 10:44:58.359605 (udev-worker)[4897]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:44:58.370920 systemd-networkd[1621]: vxlan.calico: Link UP
Apr 21 10:44:58.370932 systemd-networkd[1621]: vxlan.calico: Gained carrier
Apr 21 10:44:59.027195 systemd-networkd[1621]: cali3515bba5897: Gained IPv6LL
Apr 21 10:44:59.378753 containerd[1989]: time="2026-04-21T10:44:59.378305380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 21 10:44:59.390481 containerd[1989]: time="2026-04-21T10:44:59.390008876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.976597082s"
Apr 21 10:44:59.390481 containerd[1989]: time="2026-04-21T10:44:59.390062139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 21 10:44:59.400994 containerd[1989]: time="2026-04-21T10:44:59.400871056Z" level=info msg="CreateContainer within sandbox \"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:44:59.406410 containerd[1989]: time="2026-04-21T10:44:59.406233641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:59.407062 containerd[1989]: time="2026-04-21T10:44:59.407023870Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:59.407751 containerd[1989]: time="2026-04-21T10:44:59.407706704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:44:59.426613 containerd[1989]: time="2026-04-21T10:44:59.426569587Z" level=info msg="CreateContainer within sandbox \"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b70728b6c3ac585615e585943d49ec71d9d8a6e70536a73a75086f82efa38f9f\""
Apr 21 10:44:59.427789 containerd[1989]: time="2026-04-21T10:44:59.427643458Z" level=info msg="StartContainer for \"b70728b6c3ac585615e585943d49ec71d9d8a6e70536a73a75086f82efa38f9f\""
Apr 21 10:44:59.551670 systemd[1]: Started cri-containerd-b70728b6c3ac585615e585943d49ec71d9d8a6e70536a73a75086f82efa38f9f.scope - libcontainer container b70728b6c3ac585615e585943d49ec71d9d8a6e70536a73a75086f82efa38f9f.
Apr 21 10:44:59.610685 containerd[1989]: time="2026-04-21T10:44:59.610639100Z" level=info msg="StartContainer for \"b70728b6c3ac585615e585943d49ec71d9d8a6e70536a73a75086f82efa38f9f\" returns successfully"
Apr 21 10:44:59.636476 containerd[1989]: time="2026-04-21T10:44:59.636234371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 21 10:45:00.175722 systemd-networkd[1621]: vxlan.calico: Gained IPv6LL
Apr 21 10:45:02.011940 containerd[1989]: time="2026-04-21T10:45:02.010565596Z" level=info msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\""
Apr 21 10:45:02.233181 ntpd[1963]: Listen normally on 6 vxlan.calico 192.168.120.64:123
Apr 21 10:45:02.233275 ntpd[1963]: Listen normally on 7 cali3515bba5897 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:45:02.234821 ntpd[1963]: 21 Apr 10:45:02 ntpd[1963]: Listen normally on 6 vxlan.calico 192.168.120.64:123
Apr 21 10:45:02.234821 ntpd[1963]: 21 Apr 10:45:02 ntpd[1963]: Listen normally on 7 cali3515bba5897 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:45:02.234821 ntpd[1963]: 21 Apr 10:45:02 ntpd[1963]: Listen normally on 8 vxlan.calico [fe80::648d:94ff:fe82:4185%5]:123
Apr 21 10:45:02.233333 ntpd[1963]: Listen normally on 8 vxlan.calico [fe80::648d:94ff:fe82:4185%5]:123
Apr 21 10:45:02.804305 containerd[1989]: time="2026-04-21T10:45:02.803911395Z" level=info msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\""
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.406 [INFO][5125] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.407 [INFO][5125] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" iface="eth0" netns="/var/run/netns/cni-6b6edf89-cff7-8f33-852f-1f8678132932"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.408 [INFO][5125] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" iface="eth0" netns="/var/run/netns/cni-6b6edf89-cff7-8f33-852f-1f8678132932"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.408 [INFO][5125] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" iface="eth0" netns="/var/run/netns/cni-6b6edf89-cff7-8f33-852f-1f8678132932"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.408 [INFO][5125] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.408 [INFO][5125] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.960 [INFO][5132] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.960 [INFO][5132] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.960 [INFO][5132] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.986 [WARNING][5132] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.987 [INFO][5132] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0"
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.993 [INFO][5132] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:45:03.003216 containerd[1989]: 2026-04-21 10:45:02.998 [INFO][5125] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0"
Apr 21 10:45:03.012041 systemd[1]: run-netns-cni\x2d6b6edf89\x2dcff7\x2d8f33\x2d852f\x2d1f8678132932.mount: Deactivated successfully.
Apr 21 10:45:03.025166 containerd[1989]: time="2026-04-21T10:45:03.025062881Z" level=info msg="TearDown network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" successfully" Apr 21 10:45:03.025774 containerd[1989]: time="2026-04-21T10:45:03.025487055Z" level=info msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" returns successfully" Apr 21 10:45:03.047360 containerd[1989]: time="2026-04-21T10:45:03.046940802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r56p8,Uid:81f2aec6-921e-4349-a810-22bcdec6b773,Namespace:kube-system,Attempt:1,}" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.989 [INFO][5146] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.989 [INFO][5146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" iface="eth0" netns="/var/run/netns/cni-e638a537-1f8a-eef6-5fae-38627b1e3760" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.989 [INFO][5146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" iface="eth0" netns="/var/run/netns/cni-e638a537-1f8a-eef6-5fae-38627b1e3760" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.990 [INFO][5146] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" iface="eth0" netns="/var/run/netns/cni-e638a537-1f8a-eef6-5fae-38627b1e3760" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.990 [INFO][5146] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:02.990 [INFO][5146] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.087 [INFO][5156] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.089 [INFO][5156] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.089 [INFO][5156] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.110 [WARNING][5156] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.110 [INFO][5156] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.115 [INFO][5156] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:03.121146 containerd[1989]: 2026-04-21 10:45:03.117 [INFO][5146] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:03.125314 containerd[1989]: time="2026-04-21T10:45:03.125037223Z" level=info msg="TearDown network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" successfully" Apr 21 10:45:03.125314 containerd[1989]: time="2026-04-21T10:45:03.125099976Z" level=info msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" returns successfully" Apr 21 10:45:03.128151 systemd[1]: run-netns-cni\x2de638a537\x2d1f8a\x2deef6\x2d5fae\x2d38627b1e3760.mount: Deactivated successfully. 
Apr 21 10:45:03.134141 containerd[1989]: time="2026-04-21T10:45:03.134100725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7b6bb87f-lgg8j,Uid:fcceb021-be6c-412d-b4fd-efcc9879606c,Namespace:calico-system,Attempt:1,}" Apr 21 10:45:03.307066 containerd[1989]: time="2026-04-21T10:45:03.306518347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:03.310921 containerd[1989]: time="2026-04-21T10:45:03.310860185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:45:03.314128 containerd[1989]: time="2026-04-21T10:45:03.314091895Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:03.320564 containerd[1989]: time="2026-04-21T10:45:03.320507887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:03.321257 containerd[1989]: time="2026-04-21T10:45:03.321217211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.684152256s" Apr 21 10:45:03.321396 containerd[1989]: time="2026-04-21T10:45:03.321376776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:45:03.331210 
containerd[1989]: time="2026-04-21T10:45:03.331139009Z" level=info msg="CreateContainer within sandbox \"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:45:03.366541 containerd[1989]: time="2026-04-21T10:45:03.366402956Z" level=info msg="CreateContainer within sandbox \"31bc5c95ce10f9be370483edad33f84b79fc979bc06962e77f6b4a39178b8bbf\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d920a46ae7cf8702c7e5d82cb922918b68c6d55dfc1668ac4fdb0d7fafd43a4a\"" Apr 21 10:45:03.368719 containerd[1989]: time="2026-04-21T10:45:03.368585531Z" level=info msg="StartContainer for \"d920a46ae7cf8702c7e5d82cb922918b68c6d55dfc1668ac4fdb0d7fafd43a4a\"" Apr 21 10:45:03.410355 systemd-networkd[1621]: cali016f805a31a: Link UP Apr 21 10:45:03.412020 (udev-worker)[5220]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:45:03.412970 systemd-networkd[1621]: cali016f805a31a: Gained carrier Apr 21 10:45:03.424745 systemd[1]: Started cri-containerd-d920a46ae7cf8702c7e5d82cb922918b68c6d55dfc1668ac4fdb0d7fafd43a4a.scope - libcontainer container d920a46ae7cf8702c7e5d82cb922918b68c6d55dfc1668ac4fdb0d7fafd43a4a. 
Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.222 [INFO][5162] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0 coredns-66bc5c9577- kube-system 81f2aec6-921e-4349-a810-22bcdec6b773 980 0 2026-04-21 10:44:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-236 coredns-66bc5c9577-r56p8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali016f805a31a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.222 [INFO][5162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.293 [INFO][5183] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" HandleID="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.306 [INFO][5183] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" 
HandleID="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-236", "pod":"coredns-66bc5c9577-r56p8", "timestamp":"2026-04-21 10:45:03.293735533 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a91e0)} Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.307 [INFO][5183] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.307 [INFO][5183] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.307 [INFO][5183] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.311 [INFO][5183] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.332 [INFO][5183] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.346 [INFO][5183] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.357 [INFO][5183] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.361 [INFO][5183] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.361 [INFO][5183] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.365 [INFO][5183] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76 Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.378 [INFO][5183] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.394 [INFO][5183] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.66/26] block=192.168.120.64/26 handle="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.394 [INFO][5183] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.66/26] handle="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" host="ip-172-31-20-236" Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.394 [INFO][5183] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:45:03.474023 containerd[1989]: 2026-04-21 10:45:03.394 [INFO][5183] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.66/26] IPv6=[] ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" HandleID="k8s-pod-network.b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.400 [INFO][5162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f2aec6-921e-4349-a810-22bcdec6b773", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"coredns-66bc5c9577-r56p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f805a31a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.400 [INFO][5162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.66/32] ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.400 [INFO][5162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali016f805a31a ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.415 [INFO][5162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.421 [INFO][5162] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f2aec6-921e-4349-a810-22bcdec6b773", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76", Pod:"coredns-66bc5c9577-r56p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f805a31a", MAC:"e6:50:14:b1:25:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:03.476509 containerd[1989]: 2026-04-21 10:45:03.464 [INFO][5162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76" Namespace="kube-system" Pod="coredns-66bc5c9577-r56p8" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:03.516811 containerd[1989]: time="2026-04-21T10:45:03.515900556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:03.516811 containerd[1989]: time="2026-04-21T10:45:03.515977169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:03.516811 containerd[1989]: time="2026-04-21T10:45:03.515998429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:03.516811 containerd[1989]: time="2026-04-21T10:45:03.516103262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:03.533243 systemd-networkd[1621]: cali11289c37af6: Link UP Apr 21 10:45:03.534765 systemd-networkd[1621]: cali11289c37af6: Gained carrier Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.275 [INFO][5173] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0 calico-kube-controllers-6d7b6bb87f- calico-system fcceb021-be6c-412d-b4fd-efcc9879606c 984 0 2026-04-21 10:44:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d7b6bb87f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-236 calico-kube-controllers-6d7b6bb87f-lgg8j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali11289c37af6 [] [] }} ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.275 [INFO][5173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.343 [INFO][5194] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" HandleID="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" 
Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.353 [INFO][5194] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" HandleID="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"calico-kube-controllers-6d7b6bb87f-lgg8j", "timestamp":"2026-04-21 10:45:03.343553321 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000273600)} Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.353 [INFO][5194] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.394 [INFO][5194] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.395 [INFO][5194] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.418 [INFO][5194] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.441 [INFO][5194] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.463 [INFO][5194] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.471 [INFO][5194] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.477 [INFO][5194] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.478 [INFO][5194] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.481 [INFO][5194] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167 Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.495 [INFO][5194] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.525 [INFO][5194] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.67/26] block=192.168.120.64/26 
handle="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.526 [INFO][5194] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.67/26] handle="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" host="ip-172-31-20-236" Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.526 [INFO][5194] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:03.568645 containerd[1989]: 2026-04-21 10:45:03.526 [INFO][5194] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.67/26] IPv6=[] ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" HandleID="k8s-pod-network.c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.529 [INFO][5173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0", GenerateName:"calico-kube-controllers-6d7b6bb87f-", Namespace:"calico-system", SelfLink:"", UID:"fcceb021-be6c-412d-b4fd-efcc9879606c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7b6bb87f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"calico-kube-controllers-6d7b6bb87f-lgg8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11289c37af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.529 [INFO][5173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.67/32] ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.529 [INFO][5173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11289c37af6 ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.532 [INFO][5173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" 
WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.532 [INFO][5173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0", GenerateName:"calico-kube-controllers-6d7b6bb87f-", Namespace:"calico-system", SelfLink:"", UID:"fcceb021-be6c-412d-b4fd-efcc9879606c", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7b6bb87f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167", Pod:"calico-kube-controllers-6d7b6bb87f-lgg8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11289c37af6", 
MAC:"86:a9:c9:f1:08:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:03.570201 containerd[1989]: 2026-04-21 10:45:03.555 [INFO][5173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167" Namespace="calico-system" Pod="calico-kube-controllers-6d7b6bb87f-lgg8j" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:03.584874 containerd[1989]: time="2026-04-21T10:45:03.584693311Z" level=info msg="StartContainer for \"d920a46ae7cf8702c7e5d82cb922918b68c6d55dfc1668ac4fdb0d7fafd43a4a\" returns successfully" Apr 21 10:45:03.592344 systemd[1]: Started cri-containerd-b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76.scope - libcontainer container b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76. Apr 21 10:45:03.642570 containerd[1989]: time="2026-04-21T10:45:03.641884550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:03.642570 containerd[1989]: time="2026-04-21T10:45:03.641975965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:03.642570 containerd[1989]: time="2026-04-21T10:45:03.642002602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:03.643730 containerd[1989]: time="2026-04-21T10:45:03.642967063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:03.678861 systemd[1]: Started cri-containerd-c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167.scope - libcontainer container c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167. Apr 21 10:45:03.737549 containerd[1989]: time="2026-04-21T10:45:03.737493190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r56p8,Uid:81f2aec6-921e-4349-a810-22bcdec6b773,Namespace:kube-system,Attempt:1,} returns sandbox id \"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76\"" Apr 21 10:45:03.750359 containerd[1989]: time="2026-04-21T10:45:03.749871283Z" level=info msg="CreateContainer within sandbox \"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:45:03.789059 containerd[1989]: time="2026-04-21T10:45:03.788259056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7b6bb87f-lgg8j,Uid:fcceb021-be6c-412d-b4fd-efcc9879606c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167\"" Apr 21 10:45:03.793065 containerd[1989]: time="2026-04-21T10:45:03.793021779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:45:03.807038 containerd[1989]: time="2026-04-21T10:45:03.806414589Z" level=info msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.897 [INFO][5363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.899 [INFO][5363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" iface="eth0" netns="/var/run/netns/cni-36f6803e-868a-77af-b608-c65e167beb3a" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.899 [INFO][5363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" iface="eth0" netns="/var/run/netns/cni-36f6803e-868a-77af-b608-c65e167beb3a" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.900 [INFO][5363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" iface="eth0" netns="/var/run/netns/cni-36f6803e-868a-77af-b608-c65e167beb3a" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.900 [INFO][5363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.900 [INFO][5363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.929 [INFO][5379] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.929 [INFO][5379] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.929 [INFO][5379] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.935 [WARNING][5379] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.935 [INFO][5379] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.938 [INFO][5379] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:03.942574 containerd[1989]: 2026-04-21 10:45:03.940 [INFO][5363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:03.942574 containerd[1989]: time="2026-04-21T10:45:03.942411740Z" level=info msg="TearDown network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" successfully" Apr 21 10:45:03.942574 containerd[1989]: time="2026-04-21T10:45:03.942466543Z" level=info msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" returns successfully" Apr 21 10:45:03.947027 containerd[1989]: time="2026-04-21T10:45:03.946989354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7gjwc,Uid:439e9caa-c7e2-48c1-a515-3023dbf91270,Namespace:kube-system,Attempt:1,}" Apr 21 10:45:03.967870 containerd[1989]: time="2026-04-21T10:45:03.967815730Z" level=info msg="CreateContainer within sandbox \"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c259bd769793059e10ac571144663f8f7a1edf73a625779a3fe347dfcc833775\"" Apr 21 10:45:03.968589 containerd[1989]: 
time="2026-04-21T10:45:03.968542123Z" level=info msg="StartContainer for \"c259bd769793059e10ac571144663f8f7a1edf73a625779a3fe347dfcc833775\"" Apr 21 10:45:04.024144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885206902.mount: Deactivated successfully. Apr 21 10:45:04.024296 systemd[1]: run-netns-cni\x2d36f6803e\x2d868a\x2d77af\x2db608\x2dc65e167beb3a.mount: Deactivated successfully. Apr 21 10:45:04.033352 systemd[1]: Started cri-containerd-c259bd769793059e10ac571144663f8f7a1edf73a625779a3fe347dfcc833775.scope - libcontainer container c259bd769793059e10ac571144663f8f7a1edf73a625779a3fe347dfcc833775. Apr 21 10:45:04.085481 containerd[1989]: time="2026-04-21T10:45:04.085329166Z" level=info msg="StartContainer for \"c259bd769793059e10ac571144663f8f7a1edf73a625779a3fe347dfcc833775\" returns successfully" Apr 21 10:45:04.170239 systemd-networkd[1621]: cali702fd7589e6: Link UP Apr 21 10:45:04.171277 systemd-networkd[1621]: cali702fd7589e6: Gained carrier Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.054 [INFO][5393] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0 coredns-66bc5c9577- kube-system 439e9caa-c7e2-48c1-a515-3023dbf91270 999 0 2026-04-21 10:44:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-236 coredns-66bc5c9577-7gjwc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali702fd7589e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-" Apr 21 10:45:04.199491 
containerd[1989]: 2026-04-21 10:45:04.055 [INFO][5393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.099 [INFO][5423] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" HandleID="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.107 [INFO][5423] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" HandleID="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-236", "pod":"coredns-66bc5c9577-7gjwc", "timestamp":"2026-04-21 10:45:04.099721254 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.108 [INFO][5423] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.108 [INFO][5423] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.108 [INFO][5423] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.113 [INFO][5423] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.119 [INFO][5423] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.125 [INFO][5423] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.130 [INFO][5423] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.133 [INFO][5423] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.134 [INFO][5423] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.138 [INFO][5423] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.151 [INFO][5423] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.161 [INFO][5423] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.68/26] block=192.168.120.64/26 
handle="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.161 [INFO][5423] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.68/26] handle="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" host="ip-172-31-20-236" Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.161 [INFO][5423] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:04.199491 containerd[1989]: 2026-04-21 10:45:04.162 [INFO][5423] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.68/26] IPv6=[] ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" HandleID="k8s-pod-network.53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.165 [INFO][5393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"439e9caa-c7e2-48c1-a515-3023dbf91270", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"coredns-66bc5c9577-7gjwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali702fd7589e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.165 [INFO][5393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.68/32] ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.166 [INFO][5393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali702fd7589e6 ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" 
WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.171 [INFO][5393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.172 [INFO][5393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"439e9caa-c7e2-48c1-a515-3023dbf91270", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f", Pod:"coredns-66bc5c9577-7gjwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali702fd7589e6", MAC:"2e:a9:5e:89:e9:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:04.203271 containerd[1989]: 2026-04-21 10:45:04.190 [INFO][5393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f" Namespace="kube-system" Pod="coredns-66bc5c9577-7gjwc" WorkloadEndpoint="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:04.265862 containerd[1989]: time="2026-04-21T10:45:04.265081746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:04.265862 containerd[1989]: time="2026-04-21T10:45:04.265781083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:04.265862 containerd[1989]: time="2026-04-21T10:45:04.265813455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:04.266588 kubelet[3505]: I0421 10:45:04.266464 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75cfc67b49-5vflf" podStartSLOduration=2.328274624 podStartE2EDuration="8.263917768s" podCreationTimestamp="2026-04-21 10:44:56 +0000 UTC" firstStartedPulling="2026-04-21 10:44:57.388025564 +0000 UTC m=+51.818050647" lastFinishedPulling="2026-04-21 10:45:03.3236687 +0000 UTC m=+57.753693791" observedRunningTime="2026-04-21 10:45:04.258593426 +0000 UTC m=+58.688618524" watchObservedRunningTime="2026-04-21 10:45:04.263917768 +0000 UTC m=+58.693942918" Apr 21 10:45:04.268884 containerd[1989]: time="2026-04-21T10:45:04.266988614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:04.318692 systemd[1]: Started cri-containerd-53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f.scope - libcontainer container 53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f. 
Apr 21 10:45:04.439644 containerd[1989]: time="2026-04-21T10:45:04.439548168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7gjwc,Uid:439e9caa-c7e2-48c1-a515-3023dbf91270,Namespace:kube-system,Attempt:1,} returns sandbox id \"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f\"" Apr 21 10:45:04.462755 containerd[1989]: time="2026-04-21T10:45:04.462613827Z" level=info msg="CreateContainer within sandbox \"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:45:04.495408 containerd[1989]: time="2026-04-21T10:45:04.494243759Z" level=info msg="CreateContainer within sandbox \"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69e66ec11bbe3e0232e6d88bdc1ef97b4d0e52217f6cdd25a9a69ff00a983354\"" Apr 21 10:45:04.500204 containerd[1989]: time="2026-04-21T10:45:04.497636178Z" level=info msg="StartContainer for \"69e66ec11bbe3e0232e6d88bdc1ef97b4d0e52217f6cdd25a9a69ff00a983354\"" Apr 21 10:45:04.546691 systemd[1]: Started cri-containerd-69e66ec11bbe3e0232e6d88bdc1ef97b4d0e52217f6cdd25a9a69ff00a983354.scope - libcontainer container 69e66ec11bbe3e0232e6d88bdc1ef97b4d0e52217f6cdd25a9a69ff00a983354. 
Apr 21 10:45:04.585654 containerd[1989]: time="2026-04-21T10:45:04.585594294Z" level=info msg="StartContainer for \"69e66ec11bbe3e0232e6d88bdc1ef97b4d0e52217f6cdd25a9a69ff00a983354\" returns successfully" Apr 21 10:45:04.803013 containerd[1989]: time="2026-04-21T10:45:04.802586028Z" level=info msg="StopPodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" Apr 21 10:45:04.803527 containerd[1989]: time="2026-04-21T10:45:04.803498960Z" level=info msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" Apr 21 10:45:04.900934 kubelet[3505]: I0421 10:45:04.900234 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r56p8" podStartSLOduration=54.900208993 podStartE2EDuration="54.900208993s" podCreationTimestamp="2026-04-21 10:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:04.326863017 +0000 UTC m=+58.756888120" watchObservedRunningTime="2026-04-21 10:45:04.900208993 +0000 UTC m=+59.330234090" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.897 [INFO][5560] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.898 [INFO][5560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" iface="eth0" netns="/var/run/netns/cni-f2ec9f98-4d2b-d3bf-a72b-525cd973b8ff" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.898 [INFO][5560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" iface="eth0" netns="/var/run/netns/cni-f2ec9f98-4d2b-d3bf-a72b-525cd973b8ff" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.898 [INFO][5560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" iface="eth0" netns="/var/run/netns/cni-f2ec9f98-4d2b-d3bf-a72b-525cd973b8ff" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.899 [INFO][5560] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.899 [INFO][5560] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.947 [INFO][5574] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.947 [INFO][5574] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.948 [INFO][5574] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.958 [WARNING][5574] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.959 [INFO][5574] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.962 [INFO][5574] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:04.967879 containerd[1989]: 2026-04-21 10:45:04.965 [INFO][5560] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:04.968547 containerd[1989]: time="2026-04-21T10:45:04.968023126Z" level=info msg="TearDown network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" successfully" Apr 21 10:45:04.968547 containerd[1989]: time="2026-04-21T10:45:04.968052809Z" level=info msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" returns successfully" Apr 21 10:45:04.973108 containerd[1989]: time="2026-04-21T10:45:04.973056277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-fvqgg,Uid:d5194f4a-5b68-43fb-8b6d-2794530d8be1,Namespace:calico-system,Attempt:1,}" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.906 [INFO][5559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.906 [INFO][5559] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" iface="eth0" netns="/var/run/netns/cni-90fcaa33-8a22-29d8-84ba-613f22830b79" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.907 [INFO][5559] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" iface="eth0" netns="/var/run/netns/cni-90fcaa33-8a22-29d8-84ba-613f22830b79" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.908 [INFO][5559] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" iface="eth0" netns="/var/run/netns/cni-90fcaa33-8a22-29d8-84ba-613f22830b79" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.908 [INFO][5559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.909 [INFO][5559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.947 [INFO][5578] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.948 [INFO][5578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.962 [INFO][5578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.974 [WARNING][5578] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.975 [INFO][5578] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.977 [INFO][5578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:04.982112 containerd[1989]: 2026-04-21 10:45:04.979 [INFO][5559] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:04.982736 containerd[1989]: time="2026-04-21T10:45:04.982325779Z" level=info msg="TearDown network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" successfully" Apr 21 10:45:04.982736 containerd[1989]: time="2026-04-21T10:45:04.982358574Z" level=info msg="StopPodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" returns successfully" Apr 21 10:45:04.990995 containerd[1989]: time="2026-04-21T10:45:04.990952656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-68cq7,Uid:fbe93840-23f5-4bbe-b319-4df10f6383eb,Namespace:calico-system,Attempt:1,}" Apr 21 10:45:05.013584 systemd[1]: run-netns-cni\x2d90fcaa33\x2d8a22\x2d29d8\x2d84ba\x2d613f22830b79.mount: Deactivated successfully. Apr 21 10:45:05.013696 systemd[1]: run-netns-cni\x2df2ec9f98\x2d4d2b\x2dd3bf\x2da72b\x2d525cd973b8ff.mount: Deactivated successfully. 
Apr 21 10:45:05.214199 systemd-networkd[1621]: cali1c87931ceda: Link UP Apr 21 10:45:05.216371 systemd-networkd[1621]: cali1c87931ceda: Gained carrier Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.097 [INFO][5587] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0 calico-apiserver-747d6cc58b- calico-system d5194f4a-5b68-43fb-8b6d-2794530d8be1 1021 0 2026-04-21 10:44:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d6cc58b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-236 calico-apiserver-747d6cc58b-fvqgg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1c87931ceda [] [] }} ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.097 [INFO][5587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.149 [INFO][5613] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" HandleID="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.164 
[INFO][5613] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" HandleID="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"calico-apiserver-747d6cc58b-fvqgg", "timestamp":"2026-04-21 10:45:05.149747036 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f74a0)} Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.164 [INFO][5613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.164 [INFO][5613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.164 [INFO][5613] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.169 [INFO][5613] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.176 [INFO][5613] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.181 [INFO][5613] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.184 [INFO][5613] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.187 [INFO][5613] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.187 [INFO][5613] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.189 [INFO][5613] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038 Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.198 [INFO][5613] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.206 [INFO][5613] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.69/26] block=192.168.120.64/26 
handle="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.206 [INFO][5613] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.69/26] handle="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" host="ip-172-31-20-236" Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.206 [INFO][5613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:05.240483 containerd[1989]: 2026-04-21 10:45:05.206 [INFO][5613] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.69/26] IPv6=[] ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" HandleID="k8s-pod-network.261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.210 [INFO][5587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"d5194f4a-5b68-43fb-8b6d-2794530d8be1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"calico-apiserver-747d6cc58b-fvqgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c87931ceda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.210 [INFO][5587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.69/32] ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.210 [INFO][5587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c87931ceda ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.217 [INFO][5587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.217 [INFO][5587] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"d5194f4a-5b68-43fb-8b6d-2794530d8be1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038", Pod:"calico-apiserver-747d6cc58b-fvqgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c87931ceda", MAC:"72:48:77:96:55:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:05.242834 containerd[1989]: 2026-04-21 10:45:05.234 [INFO][5587] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-fvqgg" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:05.281908 kubelet[3505]: I0421 10:45:05.281608 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7gjwc" podStartSLOduration=55.281573292 podStartE2EDuration="55.281573292s" podCreationTimestamp="2026-04-21 10:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:45:05.280791047 +0000 UTC m=+59.710816168" watchObservedRunningTime="2026-04-21 10:45:05.281573292 +0000 UTC m=+59.711598392" Apr 21 10:45:05.308431 containerd[1989]: time="2026-04-21T10:45:05.307810684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:05.308870 containerd[1989]: time="2026-04-21T10:45:05.308787128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:05.308870 containerd[1989]: time="2026-04-21T10:45:05.308846973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:05.312923 containerd[1989]: time="2026-04-21T10:45:05.312686871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:05.395673 systemd[1]: Started cri-containerd-261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038.scope - libcontainer container 261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038. 
Apr 21 10:45:05.424873 systemd-networkd[1621]: cali016f805a31a: Gained IPv6LL Apr 21 10:45:05.491635 systemd-networkd[1621]: cali702fd7589e6: Gained IPv6LL Apr 21 10:45:05.514905 systemd-networkd[1621]: caliadd5b5b79e9: Link UP Apr 21 10:45:05.517108 systemd-networkd[1621]: caliadd5b5b79e9: Gained carrier Apr 21 10:45:05.551694 systemd-networkd[1621]: cali11289c37af6: Gained IPv6LL Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.096 [INFO][5593] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0 csi-node-driver- calico-system fbe93840-23f5-4bbe-b319-4df10f6383eb 1022 0 2026-04-21 10:44:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-236 csi-node-driver-68cq7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliadd5b5b79e9 [] [] }} ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.097 [INFO][5593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.156 [INFO][5614] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" 
HandleID="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.169 [INFO][5614] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" HandleID="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"csi-node-driver-68cq7", "timestamp":"2026-04-21 10:45:05.156599375 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115080)} Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.169 [INFO][5614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.206 [INFO][5614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.208 [INFO][5614] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.273 [INFO][5614] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.308 [INFO][5614] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.337 [INFO][5614] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.395 [INFO][5614] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.421 [INFO][5614] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.422 [INFO][5614] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.435 [INFO][5614] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6 Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.450 [INFO][5614] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.477 [INFO][5614] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.70/26] block=192.168.120.64/26 
handle="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.477 [INFO][5614] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.70/26] handle="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" host="ip-172-31-20-236" Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.477 [INFO][5614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:05.566855 containerd[1989]: 2026-04-21 10:45:05.478 [INFO][5614] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.70/26] IPv6=[] ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" HandleID="k8s-pod-network.09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.485 [INFO][5593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbe93840-23f5-4bbe-b319-4df10f6383eb", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"csi-node-driver-68cq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd5b5b79e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.486 [INFO][5593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.70/32] ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.486 [INFO][5593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadd5b5b79e9 ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.521 [INFO][5593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.526 [INFO][5593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbe93840-23f5-4bbe-b319-4df10f6383eb", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6", Pod:"csi-node-driver-68cq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd5b5b79e9", MAC:"56:12:fb:97:fa:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:05.568408 containerd[1989]: 2026-04-21 10:45:05.562 [INFO][5593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6" Namespace="calico-system" Pod="csi-node-driver-68cq7" WorkloadEndpoint="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:05.689134 containerd[1989]: time="2026-04-21T10:45:05.688829332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:05.689134 containerd[1989]: time="2026-04-21T10:45:05.688904138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:05.689134 containerd[1989]: time="2026-04-21T10:45:05.688920634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:05.689690 containerd[1989]: time="2026-04-21T10:45:05.689499013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:05.699942 containerd[1989]: time="2026-04-21T10:45:05.699765123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-fvqgg,Uid:d5194f4a-5b68-43fb-8b6d-2794530d8be1,Namespace:calico-system,Attempt:1,} returns sandbox id \"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038\"" Apr 21 10:45:05.734685 systemd[1]: Started cri-containerd-09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6.scope - libcontainer container 09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6. 
Apr 21 10:45:05.910290 containerd[1989]: time="2026-04-21T10:45:05.910236345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-68cq7,Uid:fbe93840-23f5-4bbe-b319-4df10f6383eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6\"" Apr 21 10:45:05.986466 containerd[1989]: time="2026-04-21T10:45:05.979940537Z" level=info msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\"" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.123 [WARNING][5750] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.123 [INFO][5750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.124 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" iface="eth0" netns="" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.124 [INFO][5750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.124 [INFO][5750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.209 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.210 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.211 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.224 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.224 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.227 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:06.238321 containerd[1989]: 2026-04-21 10:45:06.232 [INFO][5750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.239120 containerd[1989]: time="2026-04-21T10:45:06.238967736Z" level=info msg="TearDown network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" successfully" Apr 21 10:45:06.239120 containerd[1989]: time="2026-04-21T10:45:06.239006715Z" level=info msg="StopPodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" returns successfully" Apr 21 10:45:06.241124 containerd[1989]: time="2026-04-21T10:45:06.241066189Z" level=info msg="RemovePodSandbox for \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\"" Apr 21 10:45:06.241124 containerd[1989]: time="2026-04-21T10:45:06.241116972Z" level=info msg="Forcibly stopping sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\"" Apr 21 10:45:06.388115 systemd-networkd[1621]: cali1c87931ceda: Gained IPv6LL Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.318 [WARNING][5775] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving 
forward with the clean up ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" WorkloadEndpoint="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.318 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.318 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" iface="eth0" netns="" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.318 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.318 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.365 [INFO][5782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.365 [INFO][5782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.365 [INFO][5782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.376 [WARNING][5782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.376 [INFO][5782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" HandleID="k8s-pod-network.8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Workload="ip--172--31--20--236-k8s-whisker--6f7d6885f6--d56b5-eth0" Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.378 [INFO][5782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:06.399981 containerd[1989]: 2026-04-21 10:45:06.382 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a" Apr 21 10:45:06.400711 containerd[1989]: time="2026-04-21T10:45:06.400032608Z" level=info msg="TearDown network for sandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" successfully" Apr 21 10:45:06.456138 containerd[1989]: time="2026-04-21T10:45:06.455798943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:06.456138 containerd[1989]: time="2026-04-21T10:45:06.455975776Z" level=info msg="RemovePodSandbox \"8713757d80ddfa759f73fa48e2f5c5dec41f4a6ee807f4406205f9dbbe8f3b3a\" returns successfully" Apr 21 10:45:06.470080 containerd[1989]: time="2026-04-21T10:45:06.470041885Z" level=info msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\"" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.549 [WARNING][5801] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0", GenerateName:"calico-kube-controllers-6d7b6bb87f-", Namespace:"calico-system", SelfLink:"", UID:"fcceb021-be6c-412d-b4fd-efcc9879606c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7b6bb87f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167", Pod:"calico-kube-controllers-6d7b6bb87f-lgg8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11289c37af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.550 [INFO][5801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.550 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" iface="eth0" netns="" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.550 [INFO][5801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.550 [INFO][5801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.619 [INFO][5808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.620 [INFO][5808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.620 [INFO][5808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.630 [WARNING][5808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.630 [INFO][5808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.633 [INFO][5808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:06.641930 containerd[1989]: 2026-04-21 10:45:06.636 [INFO][5801] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.643168 containerd[1989]: time="2026-04-21T10:45:06.642081934Z" level=info msg="TearDown network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" successfully" Apr 21 10:45:06.643168 containerd[1989]: time="2026-04-21T10:45:06.642114160Z" level=info msg="StopPodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" returns successfully" Apr 21 10:45:06.643861 containerd[1989]: time="2026-04-21T10:45:06.643587703Z" level=info msg="RemovePodSandbox for \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\"" Apr 21 10:45:06.643861 containerd[1989]: time="2026-04-21T10:45:06.643626543Z" level=info msg="Forcibly stopping sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\"" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.718 [WARNING][5822] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0", GenerateName:"calico-kube-controllers-6d7b6bb87f-", Namespace:"calico-system", SelfLink:"", UID:"fcceb021-be6c-412d-b4fd-efcc9879606c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7b6bb87f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167", Pod:"calico-kube-controllers-6d7b6bb87f-lgg8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11289c37af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.719 [INFO][5822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.719 [INFO][5822] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" iface="eth0" netns="" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.719 [INFO][5822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.719 [INFO][5822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.780 [INFO][5830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.780 [INFO][5830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.781 [INFO][5830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.794 [WARNING][5830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.794 [INFO][5830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" HandleID="k8s-pod-network.e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Workload="ip--172--31--20--236-k8s-calico--kube--controllers--6d7b6bb87f--lgg8j-eth0" Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.797 [INFO][5830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:06.803256 containerd[1989]: 2026-04-21 10:45:06.799 [INFO][5822] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41" Apr 21 10:45:06.804680 containerd[1989]: time="2026-04-21T10:45:06.804638290Z" level=info msg="TearDown network for sandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" successfully" Apr 21 10:45:06.805574 containerd[1989]: time="2026-04-21T10:45:06.805424849Z" level=info msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" Apr 21 10:45:06.805885 containerd[1989]: time="2026-04-21T10:45:06.805860109Z" level=info msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" Apr 21 10:45:06.835945 containerd[1989]: time="2026-04-21T10:45:06.835894839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:06.836110 containerd[1989]: time="2026-04-21T10:45:06.836077670Z" level=info msg="RemovePodSandbox \"e5c6828d3c87bd9f618bf6b8ebe31a283c0cfcc4c58f4fddfd0c20f01b249c41\" returns successfully" Apr 21 10:45:06.837499 containerd[1989]: time="2026-04-21T10:45:06.837249518Z" level=info msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.005 [INFO][5856] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.007 [INFO][5856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" iface="eth0" netns="/var/run/netns/cni-040bf4af-2292-fa88-bd1f-12897219cb45" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.009 [INFO][5856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" iface="eth0" netns="/var/run/netns/cni-040bf4af-2292-fa88-bd1f-12897219cb45" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.012 [INFO][5856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" iface="eth0" netns="/var/run/netns/cni-040bf4af-2292-fa88-bd1f-12897219cb45" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.014 [INFO][5856] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.014 [INFO][5856] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.095 [INFO][5890] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.100 [INFO][5890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.100 [INFO][5890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.120 [WARNING][5890] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.120 [INFO][5890] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.129 [INFO][5890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.142500 containerd[1989]: 2026-04-21 10:45:07.136 [INFO][5856] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:45:07.147341 systemd[1]: run-netns-cni\x2d040bf4af\x2d2292\x2dfa88\x2dbd1f\x2d12897219cb45.mount: Deactivated successfully. 
Apr 21 10:45:07.149749 containerd[1989]: time="2026-04-21T10:45:07.148604946Z" level=info msg="TearDown network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" successfully" Apr 21 10:45:07.149749 containerd[1989]: time="2026-04-21T10:45:07.148649314Z" level=info msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" returns successfully" Apr 21 10:45:07.156020 containerd[1989]: time="2026-04-21T10:45:07.155867319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-gmpsx,Uid:b96b65f2-d7e3-4f8e-880f-b3f8c756fb62,Namespace:calico-system,Attempt:1,}" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.991 [INFO][5857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.993 [INFO][5857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" iface="eth0" netns="/var/run/netns/cni-c5a7e639-0e31-5d86-15a6-3a0231f24482" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.993 [INFO][5857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" iface="eth0" netns="/var/run/netns/cni-c5a7e639-0e31-5d86-15a6-3a0231f24482" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.995 [INFO][5857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" iface="eth0" netns="/var/run/netns/cni-c5a7e639-0e31-5d86-15a6-3a0231f24482" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.995 [INFO][5857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:06.995 [INFO][5857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.135 [INFO][5885] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.135 [INFO][5885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.135 [INFO][5885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.160 [WARNING][5885] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.160 [INFO][5885] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.163 [INFO][5885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.180173 containerd[1989]: 2026-04-21 10:45:07.173 [INFO][5857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:45:07.181717 containerd[1989]: time="2026-04-21T10:45:07.181059249Z" level=info msg="TearDown network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" successfully" Apr 21 10:45:07.181717 containerd[1989]: time="2026-04-21T10:45:07.181095448Z" level=info msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" returns successfully" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.031 [WARNING][5873] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"439e9caa-c7e2-48c1-a515-3023dbf91270", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f", Pod:"coredns-66bc5c9577-7gjwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali702fd7589e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.036 [INFO][5873] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.036 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" iface="eth0" netns="" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.036 [INFO][5873] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.037 [INFO][5873] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.154 [INFO][5895] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.156 [INFO][5895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.163 [INFO][5895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.174 [WARNING][5895] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.175 [INFO][5895] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.180 [INFO][5895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.188157 containerd[1989]: 2026-04-21 10:45:07.184 [INFO][5873] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.193865 systemd[1]: run-netns-cni\x2dc5a7e639\x2d0e31\x2d5d86\x2d15a6\x2d3a0231f24482.mount: Deactivated successfully. 
Apr 21 10:45:07.203588 containerd[1989]: time="2026-04-21T10:45:07.203524188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-rdqmf,Uid:68a2897e-5688-4d69-aa35-2e241e661e25,Namespace:calico-system,Attempt:1,}" Apr 21 10:45:07.215886 systemd-networkd[1621]: caliadd5b5b79e9: Gained IPv6LL Apr 21 10:45:07.230531 containerd[1989]: time="2026-04-21T10:45:07.230431412Z" level=info msg="TearDown network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" successfully" Apr 21 10:45:07.230714 containerd[1989]: time="2026-04-21T10:45:07.230694554Z" level=info msg="StopPodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" returns successfully" Apr 21 10:45:07.233381 containerd[1989]: time="2026-04-21T10:45:07.233349387Z" level=info msg="RemovePodSandbox for \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" Apr 21 10:45:07.233571 containerd[1989]: time="2026-04-21T10:45:07.233550425Z" level=info msg="Forcibly stopping sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\"" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.412 [WARNING][5938] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"439e9caa-c7e2-48c1-a515-3023dbf91270", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"53c58e6029e6ac62783252bd2a5c261d9ff81a4ef54375720e3d6150df17596f", Pod:"coredns-66bc5c9577-7gjwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali702fd7589e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.413 [INFO][5938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.413 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" iface="eth0" netns="" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.413 [INFO][5938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.413 [INFO][5938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.486 [INFO][5954] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.487 [INFO][5954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.487 [INFO][5954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.509 [WARNING][5954] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.510 [INFO][5954] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" HandleID="k8s-pod-network.8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--7gjwc-eth0" Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.515 [INFO][5954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.532028 containerd[1989]: 2026-04-21 10:45:07.526 [INFO][5938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41" Apr 21 10:45:07.533237 containerd[1989]: time="2026-04-21T10:45:07.533197834Z" level=info msg="TearDown network for sandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" successfully" Apr 21 10:45:07.545795 containerd[1989]: time="2026-04-21T10:45:07.545748271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:07.545935 containerd[1989]: time="2026-04-21T10:45:07.545848925Z" level=info msg="RemovePodSandbox \"8cade175f3b8beb8b9673564456c036e182ce576e70e9f380dd01f84c06b0d41\" returns successfully" Apr 21 10:45:07.546805 containerd[1989]: time="2026-04-21T10:45:07.546774028Z" level=info msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" Apr 21 10:45:07.623616 systemd-networkd[1621]: calicb0f459ab54: Link UP Apr 21 10:45:07.625209 systemd-networkd[1621]: calicb0f459ab54: Gained carrier Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.378 [INFO][5905] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0 goldmane-cccfbd5cf- calico-system b96b65f2-d7e3-4f8e-880f-b3f8c756fb62 1051 0 2026-04-21 10:44:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-236 goldmane-cccfbd5cf-gmpsx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicb0f459ab54 [] [] }} ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.378 [INFO][5905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.514 [INFO][5949] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" HandleID="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.542 [INFO][5949] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" HandleID="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102010), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"goldmane-cccfbd5cf-gmpsx", "timestamp":"2026-04-21 10:45:07.514522903 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00018a6e0)} Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.542 [INFO][5949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.542 [INFO][5949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.542 [INFO][5949] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.546 [INFO][5949] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.561 [INFO][5949] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.569 [INFO][5949] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.573 [INFO][5949] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.578 [INFO][5949] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.578 [INFO][5949] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.584 [INFO][5949] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.595 [INFO][5949] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.610 [INFO][5949] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.71/26] block=192.168.120.64/26 
handle="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.610 [INFO][5949] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.71/26] handle="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" host="ip-172-31-20-236" Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.611 [INFO][5949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.670785 containerd[1989]: 2026-04-21 10:45:07.611 [INFO][5949] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.71/26] IPv6=[] ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" HandleID="k8s-pod-network.979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.619 [INFO][5905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"goldmane-cccfbd5cf-gmpsx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicb0f459ab54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.620 [INFO][5905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.71/32] ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.620 [INFO][5905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb0f459ab54 ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.625 [INFO][5905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.626 [INFO][5905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a", Pod:"goldmane-cccfbd5cf-gmpsx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicb0f459ab54", MAC:"12:d0:f5:71:c0:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.672020 containerd[1989]: 2026-04-21 10:45:07.656 [INFO][5905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-gmpsx" 
WorkloadEndpoint="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:45:07.839580 systemd-networkd[1621]: calia534552e5a3: Link UP Apr 21 10:45:07.841511 systemd-networkd[1621]: calia534552e5a3: Gained carrier Apr 21 10:45:07.882088 containerd[1989]: time="2026-04-21T10:45:07.873053531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:07.882088 containerd[1989]: time="2026-04-21T10:45:07.873135332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:07.882088 containerd[1989]: time="2026-04-21T10:45:07.873160218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:07.882088 containerd[1989]: time="2026-04-21T10:45:07.873294938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.409 [INFO][5917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0 calico-apiserver-747d6cc58b- calico-system 68a2897e-5688-4d69-aa35-2e241e661e25 1050 0 2026-04-21 10:44:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747d6cc58b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-236 calico-apiserver-747d6cc58b-rdqmf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia534552e5a3 [] [] }} ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" 
WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.410 [INFO][5917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.549 [INFO][5956] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" HandleID="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.563 [INFO][5956] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" HandleID="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-236", "pod":"calico-apiserver-747d6cc58b-rdqmf", "timestamp":"2026-04-21 10:45:07.549319607 +0000 UTC"}, Hostname:"ip-172-31-20-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b26e0)} Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.563 [INFO][5956] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.610 [INFO][5956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.610 [INFO][5956] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-236' Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.650 [INFO][5956] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.675 [INFO][5956] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.703 [INFO][5956] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.719 [INFO][5956] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.750 [INFO][5956] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.755 [INFO][5956] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.768 [INFO][5956] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.797 [INFO][5956] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" host="ip-172-31-20-236" Apr 21 10:45:07.948148 
containerd[1989]: 2026-04-21 10:45:07.822 [INFO][5956] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.72/26] block=192.168.120.64/26 handle="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.822 [INFO][5956] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.72/26] handle="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" host="ip-172-31-20-236" Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.822 [INFO][5956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:07.948148 containerd[1989]: 2026-04-21 10:45:07.822 [INFO][5956] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.72/26] IPv6=[] ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" HandleID="k8s-pod-network.e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.826 [INFO][5917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"68a2897e-5688-4d69-aa35-2e241e661e25", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"", Pod:"calico-apiserver-747d6cc58b-rdqmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia534552e5a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.826 [INFO][5917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.72/32] ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.826 [INFO][5917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia534552e5a3 ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.877 [INFO][5917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" 
WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.888 [INFO][5917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"68a2897e-5688-4d69-aa35-2e241e661e25", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d", Pod:"calico-apiserver-747d6cc58b-rdqmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia534552e5a3", MAC:"82:f3:6a:dd:82:67", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:07.949647 containerd[1989]: 2026-04-21 10:45:07.914 [INFO][5917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d" Namespace="calico-system" Pod="calico-apiserver-747d6cc58b-rdqmf" WorkloadEndpoint="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:45:07.983648 systemd[1]: Started cri-containerd-979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a.scope - libcontainer container 979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a. Apr 21 10:45:08.090212 containerd[1989]: time="2026-04-21T10:45:08.090097324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:45:08.093903 containerd[1989]: time="2026-04-21T10:45:08.093507124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:45:08.093903 containerd[1989]: time="2026-04-21T10:45:08.093541394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:08.094805 containerd[1989]: time="2026-04-21T10:45:08.093722408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:07.786 [WARNING][5978] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"d5194f4a-5b68-43fb-8b6d-2794530d8be1", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038", Pod:"calico-apiserver-747d6cc58b-fvqgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c87931ceda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:07.786 [INFO][5978] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:07.786 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" iface="eth0" netns="" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:07.787 [INFO][5978] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:07.787 [INFO][5978] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.040 [INFO][6006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.045 [INFO][6006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.045 [INFO][6006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.071 [WARNING][6006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.072 [INFO][6006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.080 [INFO][6006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:08.098882 containerd[1989]: 2026-04-21 10:45:08.085 [INFO][5978] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.100573 containerd[1989]: time="2026-04-21T10:45:08.100527865Z" level=info msg="TearDown network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" successfully" Apr 21 10:45:08.100573 containerd[1989]: time="2026-04-21T10:45:08.100569516Z" level=info msg="StopPodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" returns successfully" Apr 21 10:45:08.102145 containerd[1989]: time="2026-04-21T10:45:08.102078604Z" level=info msg="RemovePodSandbox for \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" Apr 21 10:45:08.102145 containerd[1989]: time="2026-04-21T10:45:08.102125383Z" level=info msg="Forcibly stopping sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\"" Apr 21 10:45:08.181855 systemd[1]: Started cri-containerd-e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d.scope - libcontainer container e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d. 
Apr 21 10:45:08.223050 containerd[1989]: time="2026-04-21T10:45:08.222642481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-gmpsx,Uid:b96b65f2-d7e3-4f8e-880f-b3f8c756fb62,Namespace:calico-system,Attempt:1,} returns sandbox id \"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a\"" Apr 21 10:45:08.312584 containerd[1989]: time="2026-04-21T10:45:08.312505802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747d6cc58b-rdqmf,Uid:68a2897e-5688-4d69-aa35-2e241e661e25,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d\"" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.257 [WARNING][6090] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"d5194f4a-5b68-43fb-8b6d-2794530d8be1", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", 
ContainerID:"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038", Pod:"calico-apiserver-747d6cc58b-fvqgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c87931ceda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.257 [INFO][6090] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.257 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" iface="eth0" netns="" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.257 [INFO][6090] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.257 [INFO][6090] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.350 [INFO][6114] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.352 [INFO][6114] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.352 [INFO][6114] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.361 [WARNING][6114] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.362 [INFO][6114] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" HandleID="k8s-pod-network.ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--fvqgg-eth0" Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.366 [INFO][6114] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:08.378644 containerd[1989]: 2026-04-21 10:45:08.371 [INFO][6090] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c" Apr 21 10:45:08.382787 containerd[1989]: time="2026-04-21T10:45:08.381725306Z" level=info msg="TearDown network for sandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" successfully" Apr 21 10:45:08.401489 containerd[1989]: time="2026-04-21T10:45:08.401410064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:08.401627 containerd[1989]: time="2026-04-21T10:45:08.401519952Z" level=info msg="RemovePodSandbox \"ffdb64499ccb62c109462c485ccfff652b4e575fdb2c456000b6aab3c0e90b7c\" returns successfully" Apr 21 10:45:08.402549 containerd[1989]: time="2026-04-21T10:45:08.402140737Z" level=info msg="StopPodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.479 [WARNING][6134] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbe93840-23f5-4bbe-b319-4df10f6383eb", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6", Pod:"csi-node-driver-68cq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd5b5b79e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.480 [INFO][6134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.480 [INFO][6134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" iface="eth0" netns="" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.480 [INFO][6134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.480 [INFO][6134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.537 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.537 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.537 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.558 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.558 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.561 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:08.577867 containerd[1989]: 2026-04-21 10:45:08.572 [INFO][6134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.580833 containerd[1989]: time="2026-04-21T10:45:08.577912211Z" level=info msg="TearDown network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" successfully" Apr 21 10:45:08.580833 containerd[1989]: time="2026-04-21T10:45:08.577940679Z" level=info msg="StopPodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" returns successfully" Apr 21 10:45:08.580833 containerd[1989]: time="2026-04-21T10:45:08.579197660Z" level=info msg="RemovePodSandbox for \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" Apr 21 10:45:08.580833 containerd[1989]: time="2026-04-21T10:45:08.579235597Z" level=info msg="Forcibly stopping sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\"" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.694 [WARNING][6157] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbe93840-23f5-4bbe-b319-4df10f6383eb", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6", Pod:"csi-node-driver-68cq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd5b5b79e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.695 [INFO][6157] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.695 [INFO][6157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" iface="eth0" netns="" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.695 [INFO][6157] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.695 [INFO][6157] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.754 [INFO][6164] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.754 [INFO][6164] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.756 [INFO][6164] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.767 [WARNING][6164] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.767 [INFO][6164] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" HandleID="k8s-pod-network.aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Workload="ip--172--31--20--236-k8s-csi--node--driver--68cq7-eth0" Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.770 [INFO][6164] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:08.777742 containerd[1989]: 2026-04-21 10:45:08.774 [INFO][6157] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d" Apr 21 10:45:08.779051 containerd[1989]: time="2026-04-21T10:45:08.777788693Z" level=info msg="TearDown network for sandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" successfully" Apr 21 10:45:08.783336 containerd[1989]: time="2026-04-21T10:45:08.783291784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:08.783477 containerd[1989]: time="2026-04-21T10:45:08.783379903Z" level=info msg="RemovePodSandbox \"aafbb4d526dbccf79f141d55af4a41103e17d580cf951149f54862e416eea19d\" returns successfully" Apr 21 10:45:08.784050 containerd[1989]: time="2026-04-21T10:45:08.784022770Z" level=info msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\"" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.870 [WARNING][6179] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f2aec6-921e-4349-a810-22bcdec6b773", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76", Pod:"coredns-66bc5c9577-r56p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f805a31a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.871 [INFO][6179] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.871 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" iface="eth0" netns="" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.871 [INFO][6179] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.871 [INFO][6179] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.976 [INFO][6186] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.976 [INFO][6186] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.976 [INFO][6186] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.986 [WARNING][6186] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.986 [INFO][6186] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.990 [INFO][6186] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:09.002407 containerd[1989]: 2026-04-21 10:45:08.997 [INFO][6179] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.002407 containerd[1989]: time="2026-04-21T10:45:09.002314303Z" level=info msg="TearDown network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" successfully" Apr 21 10:45:09.002407 containerd[1989]: time="2026-04-21T10:45:09.002346682Z" level=info msg="StopPodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" returns successfully" Apr 21 10:45:09.004967 containerd[1989]: time="2026-04-21T10:45:09.004459545Z" level=info msg="RemovePodSandbox for \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\"" Apr 21 10:45:09.004967 containerd[1989]: time="2026-04-21T10:45:09.004502431Z" level=info msg="Forcibly stopping sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\"" Apr 21 10:45:09.010659 systemd-networkd[1621]: calia534552e5a3: Gained IPv6LL Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.121 [WARNING][6213] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"81f2aec6-921e-4349-a810-22bcdec6b773", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"b9002b2ad4c61d3f2397fd32c07682056cf4e0a5585bd0c0a279040e03a5dd76", Pod:"coredns-66bc5c9577-r56p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali016f805a31a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.123 [INFO][6213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.123 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" iface="eth0" netns="" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.123 [INFO][6213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.123 [INFO][6213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.182 [INFO][6221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.183 [INFO][6221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.183 [INFO][6221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.197 [WARNING][6221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.197 [INFO][6221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" HandleID="k8s-pod-network.e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Workload="ip--172--31--20--236-k8s-coredns--66bc5c9577--r56p8-eth0" Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.201 [INFO][6221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:45:09.208614 containerd[1989]: 2026-04-21 10:45:09.205 [INFO][6213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0" Apr 21 10:45:09.208614 containerd[1989]: time="2026-04-21T10:45:09.208568726Z" level=info msg="TearDown network for sandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" successfully" Apr 21 10:45:09.217628 containerd[1989]: time="2026-04-21T10:45:09.217557037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:45:09.217766 containerd[1989]: time="2026-04-21T10:45:09.217651118Z" level=info msg="RemovePodSandbox \"e2a304a1d5bf8544fd48050f2d4f4e5b99c6276223c505d3823a5a90fcf5e3a0\" returns successfully" Apr 21 10:45:09.220476 containerd[1989]: time="2026-04-21T10:45:09.220422686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.222894 containerd[1989]: time="2026-04-21T10:45:09.222795131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:45:09.222894 containerd[1989]: time="2026-04-21T10:45:09.222847660Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.225863 containerd[1989]: time="2026-04-21T10:45:09.225801420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:09.226750 containerd[1989]: time="2026-04-21T10:45:09.226710777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.433636848s" Apr 21 10:45:09.226844 containerd[1989]: time="2026-04-21T10:45:09.226756831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:45:09.233849 containerd[1989]: time="2026-04-21T10:45:09.232627759Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:45:09.277817 containerd[1989]: time="2026-04-21T10:45:09.277776771Z" level=info msg="CreateContainer within sandbox \"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:45:09.296325 containerd[1989]: time="2026-04-21T10:45:09.296188843Z" level=info msg="CreateContainer within sandbox \"c3cbf001a5e85bfc64ac8eb7e08ff56fa5f26715e8e116048ed930cde69ce167\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13\"" Apr 21 10:45:09.303485 containerd[1989]: time="2026-04-21T10:45:09.298853650Z" level=info msg="StartContainer for \"402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13\"" Apr 21 10:45:09.307531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560693036.mount: Deactivated successfully. Apr 21 10:45:09.330732 systemd-networkd[1621]: calicb0f459ab54: Gained IPv6LL Apr 21 10:45:09.354017 systemd[1]: Started cri-containerd-402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13.scope - libcontainer container 402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13. 
Apr 21 10:45:09.443313 containerd[1989]: time="2026-04-21T10:45:09.443266998Z" level=info msg="StartContainer for \"402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13\" returns successfully" Apr 21 10:45:10.507463 kubelet[3505]: I0421 10:45:10.507358 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d7b6bb87f-lgg8j" podStartSLOduration=42.065835219 podStartE2EDuration="47.507336455s" podCreationTimestamp="2026-04-21 10:44:23 +0000 UTC" firstStartedPulling="2026-04-21 10:45:03.790815007 +0000 UTC m=+58.220840081" lastFinishedPulling="2026-04-21 10:45:09.232316225 +0000 UTC m=+63.662341317" observedRunningTime="2026-04-21 10:45:10.432234749 +0000 UTC m=+64.862259845" watchObservedRunningTime="2026-04-21 10:45:10.507336455 +0000 UTC m=+64.937361603" Apr 21 10:45:12.226233 ntpd[1963]: Listen normally on 9 cali016f805a31a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:45:12.226321 ntpd[1963]: Listen normally on 10 cali11289c37af6 [fe80::ecee:eeff:feee:eeee%9]:123 
Apr 21 10:45:12.226366 ntpd[1963]: Listen normally on 11 cali702fd7589e6 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:45:12.226407 ntpd[1963]: Listen normally on 12 cali1c87931ceda [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:45:12.226618 ntpd[1963]: Listen normally on 13 caliadd5b5b79e9 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 21 10:45:12.226688 ntpd[1963]: Listen normally on 14 calicb0f459ab54 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 21 10:45:12.226728 ntpd[1963]: Listen normally on 15 calia534552e5a3 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 21 10:45:14.701928 containerd[1989]: time="2026-04-21T10:45:14.701874636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:14.703321 containerd[1989]: time="2026-04-21T10:45:14.703258082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:45:14.704057 containerd[1989]: time="2026-04-21T10:45:14.703990921Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:14.707071 containerd[1989]: time="2026-04-21T10:45:14.707022850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:14.708715 containerd[1989]: time="2026-04-21T10:45:14.707653672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.474980633s" Apr 21 10:45:14.708715 containerd[1989]: 
time="2026-04-21T10:45:14.707695821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:45:14.712777 containerd[1989]: time="2026-04-21T10:45:14.712741065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:45:14.748634 containerd[1989]: time="2026-04-21T10:45:14.748482656Z" level=info msg="CreateContainer within sandbox \"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:45:14.779403 containerd[1989]: time="2026-04-21T10:45:14.779358765Z" level=info msg="CreateContainer within sandbox \"261a8cb3d94d877405dcbe7b3318f30f23c1443c4778d59848356df7de883038\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7be63b73a17de0d6e2e41cd43047a0d43844d806ca7a3e3b0b2807865be4be1\"" Apr 21 10:45:14.796594 containerd[1989]: time="2026-04-21T10:45:14.795570456Z" level=info msg="StartContainer for \"a7be63b73a17de0d6e2e41cd43047a0d43844d806ca7a3e3b0b2807865be4be1\"" Apr 21 10:45:14.891652 systemd[1]: Started cri-containerd-a7be63b73a17de0d6e2e41cd43047a0d43844d806ca7a3e3b0b2807865be4be1.scope - libcontainer container a7be63b73a17de0d6e2e41cd43047a0d43844d806ca7a3e3b0b2807865be4be1. 
Apr 21 10:45:15.002130 containerd[1989]: time="2026-04-21T10:45:15.001467120Z" level=info msg="StartContainer for \"a7be63b73a17de0d6e2e41cd43047a0d43844d806ca7a3e3b0b2807865be4be1\" returns successfully" Apr 21 10:45:16.501455 kubelet[3505]: I0421 10:45:16.500544 3505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:45:16.723549 containerd[1989]: time="2026-04-21T10:45:16.723496625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:16.724749 containerd[1989]: time="2026-04-21T10:45:16.724698038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:45:16.725841 containerd[1989]: time="2026-04-21T10:45:16.725533550Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:16.728018 containerd[1989]: time="2026-04-21T10:45:16.727984744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:16.729031 containerd[1989]: time="2026-04-21T10:45:16.728998328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.016215064s" Apr 21 10:45:16.729769 containerd[1989]: time="2026-04-21T10:45:16.729155704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:45:16.730573 
containerd[1989]: time="2026-04-21T10:45:16.730548189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:45:16.809788 containerd[1989]: time="2026-04-21T10:45:16.809744139Z" level=info msg="CreateContainer within sandbox \"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:45:16.857488 containerd[1989]: time="2026-04-21T10:45:16.857144119Z" level=info msg="CreateContainer within sandbox \"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"71914d06eae0b9925ca3daee6f1c069fa2b6bfe427e8223034dc8027737f5df8\"" Apr 21 10:45:16.861797 containerd[1989]: time="2026-04-21T10:45:16.861197434Z" level=info msg="StartContainer for \"71914d06eae0b9925ca3daee6f1c069fa2b6bfe427e8223034dc8027737f5df8\"" Apr 21 10:45:16.921277 systemd[1]: Started cri-containerd-71914d06eae0b9925ca3daee6f1c069fa2b6bfe427e8223034dc8027737f5df8.scope - libcontainer container 71914d06eae0b9925ca3daee6f1c069fa2b6bfe427e8223034dc8027737f5df8. Apr 21 10:45:16.960310 containerd[1989]: time="2026-04-21T10:45:16.959916161Z" level=info msg="StartContainer for \"71914d06eae0b9925ca3daee6f1c069fa2b6bfe427e8223034dc8027737f5df8\" returns successfully" Apr 21 10:45:20.010395 systemd[1]: Started sshd@7-172.31.20.236:22-50.85.169.122:58110.service - OpenSSH per-connection server daemon (50.85.169.122:58110). Apr 21 10:45:21.129681 sshd[6425]: Accepted publickey for core from 50.85.169.122 port 58110 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:21.135265 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:21.155787 systemd-logind[1970]: New session 8 of user core. Apr 21 10:45:21.161956 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 21 10:45:21.369972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146735856.mount: Deactivated successfully. Apr 21 10:45:22.568904 containerd[1989]: time="2026-04-21T10:45:22.568838882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:22.570581 containerd[1989]: time="2026-04-21T10:45:22.570511420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:45:22.612113 containerd[1989]: time="2026-04-21T10:45:22.611146756Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:22.617777 containerd[1989]: time="2026-04-21T10:45:22.617068934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:22.618282 containerd[1989]: time="2026-04-21T10:45:22.618237918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.887649979s" Apr 21 10:45:22.618397 containerd[1989]: time="2026-04-21T10:45:22.618289207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:45:22.693745 containerd[1989]: time="2026-04-21T10:45:22.693703296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:45:22.727804 sshd[6425]: pam_unix(sshd:session): 
session closed for user core Apr 21 10:45:22.738916 systemd[1]: sshd@7-172.31.20.236:22-50.85.169.122:58110.service: Deactivated successfully. Apr 21 10:45:22.743177 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:45:22.749277 systemd-logind[1970]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:45:22.751934 systemd-logind[1970]: Removed session 8. Apr 21 10:45:22.903519 containerd[1989]: time="2026-04-21T10:45:22.903381706Z" level=info msg="CreateContainer within sandbox \"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:45:22.947192 containerd[1989]: time="2026-04-21T10:45:22.947140049Z" level=info msg="CreateContainer within sandbox \"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034\"" Apr 21 10:45:22.956711 containerd[1989]: time="2026-04-21T10:45:22.956652326Z" level=info msg="StartContainer for \"bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034\"" Apr 21 10:45:23.138217 systemd[1]: Started cri-containerd-bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034.scope - libcontainer container bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034. 
Apr 21 10:45:23.228474 containerd[1989]: time="2026-04-21T10:45:23.228325796Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:23.231311 containerd[1989]: time="2026-04-21T10:45:23.231078492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:45:23.246873 containerd[1989]: time="2026-04-21T10:45:23.246822308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 552.83721ms" Apr 21 10:45:23.247192 containerd[1989]: time="2026-04-21T10:45:23.247043647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:45:23.256291 containerd[1989]: time="2026-04-21T10:45:23.255950885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:45:23.269577 containerd[1989]: time="2026-04-21T10:45:23.269537141Z" level=info msg="CreateContainer within sandbox \"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:45:23.272475 containerd[1989]: time="2026-04-21T10:45:23.270679393Z" level=info msg="StartContainer for \"bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034\" returns successfully" Apr 21 10:45:23.305488 containerd[1989]: time="2026-04-21T10:45:23.305385122Z" level=info msg="CreateContainer within sandbox \"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53dbfa621bc230d03fa34828f7dce8929dc3ffee8293c86a5e6fa3c4e88ad887\"" Apr 21 10:45:23.306634 containerd[1989]: time="2026-04-21T10:45:23.306601280Z" level=info msg="StartContainer for \"53dbfa621bc230d03fa34828f7dce8929dc3ffee8293c86a5e6fa3c4e88ad887\"" Apr 21 10:45:23.397697 systemd[1]: Started cri-containerd-53dbfa621bc230d03fa34828f7dce8929dc3ffee8293c86a5e6fa3c4e88ad887.scope - libcontainer container 53dbfa621bc230d03fa34828f7dce8929dc3ffee8293c86a5e6fa3c4e88ad887. Apr 21 10:45:23.451765 containerd[1989]: time="2026-04-21T10:45:23.451723022Z" level=info msg="StartContainer for \"53dbfa621bc230d03fa34828f7dce8929dc3ffee8293c86a5e6fa3c4e88ad887\" returns successfully" Apr 21 10:45:24.045925 kubelet[3505]: I0421 10:45:24.025932 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-747d6cc58b-fvqgg" podStartSLOduration=52.994286827 podStartE2EDuration="1m2.000607136s" podCreationTimestamp="2026-04-21 10:44:22 +0000 UTC" firstStartedPulling="2026-04-21 10:45:05.70266742 +0000 UTC m=+60.132692497" lastFinishedPulling="2026-04-21 10:45:14.70898772 +0000 UTC m=+69.139012806" observedRunningTime="2026-04-21 10:45:15.584536959 +0000 UTC m=+70.014562057" watchObservedRunningTime="2026-04-21 10:45:24.000607136 +0000 UTC m=+78.430632230" Apr 21 10:45:24.069215 kubelet[3505]: I0421 10:45:24.068100 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-gmpsx" podStartSLOduration=47.615393348 podStartE2EDuration="1m2.068080352s" podCreationTimestamp="2026-04-21 10:44:22 +0000 UTC" firstStartedPulling="2026-04-21 10:45:08.228169183 +0000 UTC m=+62.658194268" lastFinishedPulling="2026-04-21 10:45:22.68085618 +0000 UTC m=+77.110881272" observedRunningTime="2026-04-21 10:45:24.046364006 +0000 UTC m=+78.476389093" watchObservedRunningTime="2026-04-21 10:45:24.068080352 +0000 UTC m=+78.498105445" Apr 
21 10:45:25.750339 kubelet[3505]: I0421 10:45:25.750255 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-747d6cc58b-rdqmf" podStartSLOduration=48.810965922 podStartE2EDuration="1m3.750231355s" podCreationTimestamp="2026-04-21 10:44:22 +0000 UTC" firstStartedPulling="2026-04-21 10:45:08.316426141 +0000 UTC m=+62.746451220" lastFinishedPulling="2026-04-21 10:45:23.255691566 +0000 UTC m=+77.685716653" observedRunningTime="2026-04-21 10:45:24.102404459 +0000 UTC m=+78.532429552" watchObservedRunningTime="2026-04-21 10:45:25.750231355 +0000 UTC m=+80.180256449" Apr 21 10:45:26.600092 containerd[1989]: time="2026-04-21T10:45:26.598714210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:26.602728 containerd[1989]: time="2026-04-21T10:45:26.602632949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:45:26.607708 containerd[1989]: time="2026-04-21T10:45:26.607664352Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:26.638400 containerd[1989]: time="2026-04-21T10:45:26.638326104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:45:26.656459 containerd[1989]: time="2026-04-21T10:45:26.656269952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.400269664s" Apr 21 10:45:26.656459 containerd[1989]: time="2026-04-21T10:45:26.656319177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:45:26.701648 containerd[1989]: time="2026-04-21T10:45:26.700830859Z" level=info msg="CreateContainer within sandbox \"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:45:26.739300 containerd[1989]: time="2026-04-21T10:45:26.739239851Z" level=info msg="CreateContainer within sandbox \"09a7a65a15abb86b977bc619328574bb6430d07a02fb02ed7b4607bbf16806e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7\"" Apr 21 10:45:26.740327 containerd[1989]: time="2026-04-21T10:45:26.740293400Z" level=info msg="StartContainer for \"3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7\"" Apr 21 10:45:26.821358 systemd[1]: run-containerd-runc-k8s.io-3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7-runc.UI6AgW.mount: Deactivated successfully. Apr 21 10:45:26.836733 systemd[1]: Started cri-containerd-3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7.scope - libcontainer container 3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7. Apr 21 10:45:26.969128 containerd[1989]: time="2026-04-21T10:45:26.968722654Z" level=info msg="StartContainer for \"3b398bfe9e487ed6cd8b0e3ae350c916be7edd999511a2051a2e6e02ea48cbe7\" returns successfully" Apr 21 10:45:27.943860 systemd[1]: Started sshd@8-172.31.20.236:22-50.85.169.122:58114.service - OpenSSH per-connection server daemon (50.85.169.122:58114). 
Apr 21 10:45:28.170740 kubelet[3505]: I0421 10:45:28.169195 3505 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:45:28.178599 kubelet[3505]: I0421 10:45:28.178561 3505 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:45:29.041898 sshd[6672]: Accepted publickey for core from 50.85.169.122 port 58114 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:29.046798 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:29.053133 systemd-logind[1970]: New session 9 of user core. Apr 21 10:45:29.061699 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:45:30.927094 sshd[6672]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:30.931689 systemd[1]: sshd@8-172.31.20.236:22-50.85.169.122:58114.service: Deactivated successfully. Apr 21 10:45:30.934694 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:45:30.936341 systemd-logind[1970]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:45:30.938222 systemd-logind[1970]: Removed session 9. Apr 21 10:45:36.104845 systemd[1]: Started sshd@9-172.31.20.236:22-50.85.169.122:42426.service - OpenSSH per-connection server daemon (50.85.169.122:42426). Apr 21 10:45:37.151011 sshd[6693]: Accepted publickey for core from 50.85.169.122 port 42426 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:37.151768 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:37.157535 systemd-logind[1970]: New session 10 of user core. Apr 21 10:45:37.161632 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 21 10:45:38.214921 sshd[6693]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:38.222788 systemd[1]: sshd@9-172.31.20.236:22-50.85.169.122:42426.service: Deactivated successfully. Apr 21 10:45:38.225556 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:45:38.226975 systemd-logind[1970]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:45:38.228325 systemd-logind[1970]: Removed session 10. Apr 21 10:45:43.395811 systemd[1]: Started sshd@10-172.31.20.236:22-50.85.169.122:57878.service - OpenSSH per-connection server daemon (50.85.169.122:57878). Apr 21 10:45:44.506907 sshd[6753]: Accepted publickey for core from 50.85.169.122 port 57878 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:44.512710 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:44.520198 systemd-logind[1970]: New session 11 of user core. Apr 21 10:45:44.524668 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:45:45.704126 sshd[6753]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:45.710201 systemd-logind[1970]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:45:45.711075 systemd[1]: sshd@10-172.31.20.236:22-50.85.169.122:57878.service: Deactivated successfully. Apr 21 10:45:45.713981 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:45:45.715054 systemd-logind[1970]: Removed session 11. 
Apr 21 10:45:45.792114 kubelet[3505]: I0421 10:45:45.791901 3505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:45:45.851087 kubelet[3505]: I0421 10:45:45.840378 3505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-68cq7" podStartSLOduration=62.156622439 podStartE2EDuration="1m22.833735614s" podCreationTimestamp="2026-04-21 10:44:23 +0000 UTC" firstStartedPulling="2026-04-21 10:45:05.981807683 +0000 UTC m=+60.411832770" lastFinishedPulling="2026-04-21 10:45:26.658920859 +0000 UTC m=+81.088945945" observedRunningTime="2026-04-21 10:45:28.00604018 +0000 UTC m=+82.436065287" watchObservedRunningTime="2026-04-21 10:45:45.833735614 +0000 UTC m=+100.263760740" Apr 21 10:45:45.887431 systemd[1]: Started sshd@11-172.31.20.236:22-50.85.169.122:57880.service - OpenSSH per-connection server daemon (50.85.169.122:57880). Apr 21 10:45:46.894545 sshd[6773]: Accepted publickey for core from 50.85.169.122 port 57880 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:46.896431 sshd[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:46.901031 systemd-logind[1970]: New session 12 of user core. Apr 21 10:45:46.906667 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:45:47.761052 sshd[6773]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:47.764522 systemd[1]: sshd@11-172.31.20.236:22-50.85.169.122:57880.service: Deactivated successfully. Apr 21 10:45:47.766996 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:45:47.768704 systemd-logind[1970]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:45:47.770711 systemd-logind[1970]: Removed session 12. Apr 21 10:45:47.944776 systemd[1]: Started sshd@12-172.31.20.236:22-50.85.169.122:57890.service - OpenSSH per-connection server daemon (50.85.169.122:57890). 
Apr 21 10:45:48.984195 sshd[6786]: Accepted publickey for core from 50.85.169.122 port 57890 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:48.985890 sshd[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:48.991552 systemd-logind[1970]: New session 13 of user core. Apr 21 10:45:48.998708 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:45:49.844955 sshd[6786]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:49.851304 systemd[1]: sshd@12-172.31.20.236:22-50.85.169.122:57890.service: Deactivated successfully. Apr 21 10:45:49.853989 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:45:49.855267 systemd-logind[1970]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:45:49.856780 systemd-logind[1970]: Removed session 13. Apr 21 10:45:54.641133 systemd[1]: run-containerd-runc-k8s.io-d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d-runc.jyfnV6.mount: Deactivated successfully. Apr 21 10:45:55.020740 systemd[1]: Started sshd@13-172.31.20.236:22-50.85.169.122:36408.service - OpenSSH per-connection server daemon (50.85.169.122:36408). Apr 21 10:45:56.111104 sshd[6827]: Accepted publickey for core from 50.85.169.122 port 36408 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:56.115375 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:56.130369 systemd-logind[1970]: New session 14 of user core. Apr 21 10:45:56.134641 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:45:57.386817 sshd[6827]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:57.394498 systemd-logind[1970]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:45:57.395639 systemd[1]: sshd@13-172.31.20.236:22-50.85.169.122:36408.service: Deactivated successfully. 
Apr 21 10:45:57.398061 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:45:57.399202 systemd-logind[1970]: Removed session 14. Apr 21 10:45:57.562310 systemd[1]: Started sshd@14-172.31.20.236:22-50.85.169.122:36414.service - OpenSSH per-connection server daemon (50.85.169.122:36414). Apr 21 10:45:58.027296 systemd[1]: run-containerd-runc-k8s.io-402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13-runc.qDdPzL.mount: Deactivated successfully. Apr 21 10:45:58.601484 sshd[6862]: Accepted publickey for core from 50.85.169.122 port 36414 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:45:58.602981 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:45:58.608530 systemd-logind[1970]: New session 15 of user core. Apr 21 10:45:58.614666 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:45:59.841976 sshd[6862]: pam_unix(sshd:session): session closed for user core Apr 21 10:45:59.845848 systemd[1]: sshd@14-172.31.20.236:22-50.85.169.122:36414.service: Deactivated successfully. Apr 21 10:45:59.848318 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:45:59.850477 systemd-logind[1970]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:45:59.852106 systemd-logind[1970]: Removed session 15. Apr 21 10:46:00.036881 systemd[1]: Started sshd@15-172.31.20.236:22-50.85.169.122:40412.service - OpenSSH per-connection server daemon (50.85.169.122:40412). Apr 21 10:46:01.133808 sshd[6891]: Accepted publickey for core from 50.85.169.122 port 40412 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:01.134617 sshd[6891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:01.140504 systemd-logind[1970]: New session 16 of user core. Apr 21 10:46:01.145716 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 21 10:46:03.268258 sshd[6891]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:03.282432 systemd[1]: sshd@15-172.31.20.236:22-50.85.169.122:40412.service: Deactivated successfully. Apr 21 10:46:03.285131 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:46:03.286090 systemd-logind[1970]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:46:03.287393 systemd-logind[1970]: Removed session 16. Apr 21 10:46:03.430799 systemd[1]: Started sshd@16-172.31.20.236:22-50.85.169.122:40424.service - OpenSSH per-connection server daemon (50.85.169.122:40424). Apr 21 10:46:04.467077 sshd[6916]: Accepted publickey for core from 50.85.169.122 port 40424 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:04.470876 sshd[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:04.477861 systemd-logind[1970]: New session 17 of user core. Apr 21 10:46:04.480663 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:46:05.860800 sshd[6916]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:05.869003 systemd[1]: sshd@16-172.31.20.236:22-50.85.169.122:40424.service: Deactivated successfully. Apr 21 10:46:05.874616 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:46:05.876208 systemd-logind[1970]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:46:05.877806 systemd-logind[1970]: Removed session 17. Apr 21 10:46:06.042720 systemd[1]: Started sshd@17-172.31.20.236:22-50.85.169.122:40436.service - OpenSSH per-connection server daemon (50.85.169.122:40436). Apr 21 10:46:07.085770 sshd[6931]: Accepted publickey for core from 50.85.169.122 port 40436 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:07.087410 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:07.094487 systemd-logind[1970]: New session 18 of user core. 
Apr 21 10:46:07.102303 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:46:07.876777 sshd[6931]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:07.881068 systemd[1]: sshd@17-172.31.20.236:22-50.85.169.122:40436.service: Deactivated successfully. Apr 21 10:46:07.884045 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:46:07.884952 systemd-logind[1970]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:46:07.886226 systemd-logind[1970]: Removed session 18. Apr 21 10:46:09.338239 containerd[1989]: time="2026-04-21T10:46:09.308549447Z" level=info msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:09.989 [WARNING][6951] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"68a2897e-5688-4d69-aa35-2e241e661e25", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d", Pod:"calico-apiserver-747d6cc58b-rdqmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia534552e5a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:09.995 [INFO][6951] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:09.995 [INFO][6951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" iface="eth0" netns="" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:09.995 [INFO][6951] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:09.995 [INFO][6951] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.380 [INFO][6960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.385 [INFO][6960] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.386 [INFO][6960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.401 [WARNING][6960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.401 [INFO][6960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.403 [INFO][6960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:46:10.408528 containerd[1989]: 2026-04-21 10:46:10.405 [INFO][6951] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.415322 containerd[1989]: time="2026-04-21T10:46:10.415266302Z" level=info msg="TearDown network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" successfully" Apr 21 10:46:10.415519 containerd[1989]: time="2026-04-21T10:46:10.415496442Z" level=info msg="StopPodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" returns successfully" Apr 21 10:46:10.441911 containerd[1989]: time="2026-04-21T10:46:10.441809883Z" level=info msg="RemovePodSandbox for \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" Apr 21 10:46:10.452922 containerd[1989]: time="2026-04-21T10:46:10.452521144Z" level=info msg="Forcibly stopping sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\"" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.526 [WARNING][6974] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0", GenerateName:"calico-apiserver-747d6cc58b-", Namespace:"calico-system", SelfLink:"", UID:"68a2897e-5688-4d69-aa35-2e241e661e25", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747d6cc58b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"e8d07ab1a6a4b88bae04b7534c2881664c2532ad591ba9519299af6f48a9ab8d", Pod:"calico-apiserver-747d6cc58b-rdqmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia534552e5a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.527 [INFO][6974] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.527 [INFO][6974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" iface="eth0" netns="" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.529 [INFO][6974] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.529 [INFO][6974] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.570 [INFO][6983] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.570 [INFO][6983] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.570 [INFO][6983] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.580 [WARNING][6983] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.581 [INFO][6983] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" HandleID="k8s-pod-network.575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Workload="ip--172--31--20--236-k8s-calico--apiserver--747d6cc58b--rdqmf-eth0" Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.583 [INFO][6983] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:46:10.599171 containerd[1989]: 2026-04-21 10:46:10.594 [INFO][6974] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf" Apr 21 10:46:10.600512 containerd[1989]: time="2026-04-21T10:46:10.600274905Z" level=info msg="TearDown network for sandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" successfully" Apr 21 10:46:10.777319 containerd[1989]: time="2026-04-21T10:46:10.776963818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:46:10.777319 containerd[1989]: time="2026-04-21T10:46:10.777090547Z" level=info msg="RemovePodSandbox \"575dd8f5dc1d435aa63c5919ef74244e405b7ca63d0e014d92c7b7780461b1cf\" returns successfully" Apr 21 10:46:10.779486 containerd[1989]: time="2026-04-21T10:46:10.778846019Z" level=info msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.861 [WARNING][7016] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a", Pod:"goldmane-cccfbd5cf-gmpsx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calicb0f459ab54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.862 [INFO][7016] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.862 [INFO][7016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" iface="eth0" netns="" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.862 [INFO][7016] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.862 [INFO][7016] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.888 [INFO][7023] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.888 [INFO][7023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.888 [INFO][7023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.895 [WARNING][7023] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.895 [INFO][7023] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.897 [INFO][7023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:46:10.901993 containerd[1989]: 2026-04-21 10:46:10.899 [INFO][7016] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:10.904238 containerd[1989]: time="2026-04-21T10:46:10.902030493Z" level=info msg="TearDown network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" successfully" Apr 21 10:46:10.904238 containerd[1989]: time="2026-04-21T10:46:10.902062662Z" level=info msg="StopPodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" returns successfully" Apr 21 10:46:10.904238 containerd[1989]: time="2026-04-21T10:46:10.902778189Z" level=info msg="RemovePodSandbox for \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" Apr 21 10:46:10.904238 containerd[1989]: time="2026-04-21T10:46:10.902895642Z" level=info msg="Forcibly stopping sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\"" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.953 [WARNING][7037] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b96b65f2-d7e3-4f8e-880f-b3f8c756fb62", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-236", ContainerID:"979181d557385ba6879a5c5629044911adcd606b0c31fa3494f974185c8c356a", Pod:"goldmane-cccfbd5cf-gmpsx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicb0f459ab54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.953 [INFO][7037] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.953 [INFO][7037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" iface="eth0" netns="" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.953 [INFO][7037] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.953 [INFO][7037] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.985 [INFO][7044] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.985 [INFO][7044] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.986 [INFO][7044] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.997 [WARNING][7044] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:10.997 [INFO][7044] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" HandleID="k8s-pod-network.2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Workload="ip--172--31--20--236-k8s-goldmane--cccfbd5cf--gmpsx-eth0" Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:11.000 [INFO][7044] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:46:11.004700 containerd[1989]: 2026-04-21 10:46:11.002 [INFO][7037] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e" Apr 21 10:46:11.006567 containerd[1989]: time="2026-04-21T10:46:11.004750864Z" level=info msg="TearDown network for sandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" successfully" Apr 21 10:46:11.016381 containerd[1989]: time="2026-04-21T10:46:11.016190474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:46:11.016762 containerd[1989]: time="2026-04-21T10:46:11.016599686Z" level=info msg="RemovePodSandbox \"2c7c3149820859fa1a2c0819e0f5797b417a0ab6c149fa14a30573b967c4c94e\" returns successfully" Apr 21 10:46:13.051003 systemd[1]: Started sshd@18-172.31.20.236:22-50.85.169.122:52452.service - OpenSSH per-connection server daemon (50.85.169.122:52452). 
Apr 21 10:46:14.158138 sshd[7053]: Accepted publickey for core from 50.85.169.122 port 52452 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:14.161867 sshd[7053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:14.169516 systemd-logind[1970]: New session 19 of user core. Apr 21 10:46:14.176676 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:46:16.020138 sshd[7053]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:16.035575 systemd-logind[1970]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:46:16.037220 systemd[1]: sshd@18-172.31.20.236:22-50.85.169.122:52452.service: Deactivated successfully. Apr 21 10:46:16.041088 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:46:16.042825 systemd-logind[1970]: Removed session 19. Apr 21 10:46:21.202650 systemd[1]: Started sshd@19-172.31.20.236:22-50.85.169.122:49404.service - OpenSSH per-connection server daemon (50.85.169.122:49404). Apr 21 10:46:22.315461 sshd[7082]: Accepted publickey for core from 50.85.169.122 port 49404 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:22.327343 sshd[7082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:22.356806 systemd-logind[1970]: New session 20 of user core. Apr 21 10:46:22.357670 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:46:23.020311 systemd[1]: run-containerd-runc-k8s.io-bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034-runc.oiuPPs.mount: Deactivated successfully. Apr 21 10:46:23.819383 sshd[7082]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:23.833636 systemd-logind[1970]: Session 20 logged out. Waiting for processes to exit. Apr 21 10:46:23.835072 systemd[1]: sshd@19-172.31.20.236:22-50.85.169.122:49404.service: Deactivated successfully. 
Apr 21 10:46:23.839888 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 10:46:23.843743 systemd-logind[1970]: Removed session 20. Apr 21 10:46:29.008874 systemd[1]: Started sshd@20-172.31.20.236:22-50.85.169.122:49414.service - OpenSSH per-connection server daemon (50.85.169.122:49414). Apr 21 10:46:30.131849 sshd[7164]: Accepted publickey for core from 50.85.169.122 port 49414 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:30.136126 sshd[7164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:30.142660 systemd-logind[1970]: New session 21 of user core. Apr 21 10:46:30.146627 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 10:46:31.564035 sshd[7164]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:31.570395 systemd-logind[1970]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:46:31.570855 systemd[1]: sshd@20-172.31.20.236:22-50.85.169.122:49414.service: Deactivated successfully. Apr 21 10:46:31.574975 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:46:31.581510 systemd-logind[1970]: Removed session 21. Apr 21 10:46:36.748847 systemd[1]: Started sshd@21-172.31.20.236:22-50.85.169.122:38582.service - OpenSSH per-connection server daemon (50.85.169.122:38582). Apr 21 10:46:37.785424 sshd[7210]: Accepted publickey for core from 50.85.169.122 port 38582 ssh2: RSA SHA256:K0lTgDmoRERM2v/d48xg9tlwHzsXpjQVTWNBuonNvzE Apr 21 10:46:37.787309 sshd[7210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:46:37.792385 systemd-logind[1970]: New session 22 of user core. Apr 21 10:46:37.795920 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 10:46:38.659967 sshd[7210]: pam_unix(sshd:session): session closed for user core Apr 21 10:46:38.665407 systemd[1]: sshd@21-172.31.20.236:22-50.85.169.122:38582.service: Deactivated successfully. 
Apr 21 10:46:38.668003 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 10:46:38.668957 systemd-logind[1970]: Session 22 logged out. Waiting for processes to exit. Apr 21 10:46:38.670083 systemd-logind[1970]: Removed session 22. Apr 21 10:46:40.437176 systemd[1]: run-containerd-runc-k8s.io-402d8253f94ab5cdb9662904c80a91ffe99472ce353d0e0d9b0cfac8aae6cd13-runc.3H9orQ.mount: Deactivated successfully. Apr 21 10:46:52.972569 systemd[1]: cri-containerd-4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8.scope: Deactivated successfully. Apr 21 10:46:52.972932 systemd[1]: cri-containerd-4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8.scope: Consumed 3.651s CPU time, 15.3M memory peak, 0B memory swap peak. Apr 21 10:46:53.182128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8-rootfs.mount: Deactivated successfully. Apr 21 10:46:53.239714 containerd[1989]: time="2026-04-21T10:46:53.228550170Z" level=info msg="shim disconnected" id=4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8 namespace=k8s.io Apr 21 10:46:53.239714 containerd[1989]: time="2026-04-21T10:46:53.239655624Z" level=warning msg="cleaning up after shim disconnected" id=4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8 namespace=k8s.io Apr 21 10:46:53.239714 containerd[1989]: time="2026-04-21T10:46:53.239684362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:46:53.261715 systemd[1]: cri-containerd-99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7.scope: Deactivated successfully. Apr 21 10:46:53.262025 systemd[1]: cri-containerd-99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7.scope: Consumed 9.983s CPU time. 
Apr 21 10:46:53.352579 containerd[1989]: time="2026-04-21T10:46:53.348860963Z" level=info msg="shim disconnected" id=99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7 namespace=k8s.io Apr 21 10:46:53.352579 containerd[1989]: time="2026-04-21T10:46:53.348956294Z" level=warning msg="cleaning up after shim disconnected" id=99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7 namespace=k8s.io Apr 21 10:46:53.352579 containerd[1989]: time="2026-04-21T10:46:53.348971815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:46:53.350188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7-rootfs.mount: Deactivated successfully. Apr 21 10:46:53.517928 kubelet[3505]: I0421 10:46:53.517358 3505 scope.go:117] "RemoveContainer" containerID="4915b79bb71322ae2773726f66210ab2913e4b53f7d241d7478122a2c46cb8b8" Apr 21 10:46:53.520886 kubelet[3505]: I0421 10:46:53.517992 3505 scope.go:117] "RemoveContainer" containerID="99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7" Apr 21 10:46:53.580818 containerd[1989]: time="2026-04-21T10:46:53.580658225Z" level=info msg="CreateContainer within sandbox \"49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 21 10:46:53.580818 containerd[1989]: time="2026-04-21T10:46:53.580663686Z" level=info msg="CreateContainer within sandbox \"bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 10:46:53.726261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81915274.mount: Deactivated successfully. 
Apr 21 10:46:53.745788 containerd[1989]: time="2026-04-21T10:46:53.745731764Z" level=info msg="CreateContainer within sandbox \"49347f268675521861c2d1466a2983756be398ad662125c66c158a1669b7b6aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a\"" Apr 21 10:46:53.753584 containerd[1989]: time="2026-04-21T10:46:53.752734096Z" level=info msg="StartContainer for \"46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a\"" Apr 21 10:46:53.772415 containerd[1989]: time="2026-04-21T10:46:53.772268157Z" level=info msg="CreateContainer within sandbox \"bf461582921c410087d2b54ac3a64dd9fd24685a1d7815d430d098ee9a2af927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d73e57303b26f2a9abbf580ed49bbfada80f9463a848dc0f39c5c59794d3b15a\"" Apr 21 10:46:53.774328 containerd[1989]: time="2026-04-21T10:46:53.774283509Z" level=info msg="StartContainer for \"d73e57303b26f2a9abbf580ed49bbfada80f9463a848dc0f39c5c59794d3b15a\"" Apr 21 10:46:53.834686 systemd[1]: Started cri-containerd-46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a.scope - libcontainer container 46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a. Apr 21 10:46:53.837477 systemd[1]: Started cri-containerd-d73e57303b26f2a9abbf580ed49bbfada80f9463a848dc0f39c5c59794d3b15a.scope - libcontainer container d73e57303b26f2a9abbf580ed49bbfada80f9463a848dc0f39c5c59794d3b15a. 
Apr 21 10:46:53.908888 containerd[1989]: time="2026-04-21T10:46:53.908828511Z" level=info msg="StartContainer for \"46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a\" returns successfully" Apr 21 10:46:53.929831 containerd[1989]: time="2026-04-21T10:46:53.929781794Z" level=info msg="StartContainer for \"d73e57303b26f2a9abbf580ed49bbfada80f9463a848dc0f39c5c59794d3b15a\" returns successfully" Apr 21 10:46:54.189515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711780395.mount: Deactivated successfully. Apr 21 10:46:54.624783 systemd[1]: run-containerd-runc-k8s.io-d850ad0556a510b7d11b507667f22265bbfca84e8ca33207bec3573cd6b0b20d-runc.RhJSsw.mount: Deactivated successfully. Apr 21 10:46:55.967332 systemd[1]: run-containerd-runc-k8s.io-bfdf78690fa28bb550d5ef116c7a00b7e4e584c73f0a9866914db1d4ca200034-runc.ZBhzsS.mount: Deactivated successfully. Apr 21 10:46:59.082621 kubelet[3505]: E0421 10:46:59.082564 3505 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 21 10:46:59.188095 systemd[1]: cri-containerd-1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5.scope: Deactivated successfully. Apr 21 10:46:59.188399 systemd[1]: cri-containerd-1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5.scope: Consumed 1.956s CPU time, 13.9M memory peak, 0B memory swap peak. 
Apr 21 10:46:59.217022 containerd[1989]: time="2026-04-21T10:46:59.216942566Z" level=info msg="shim disconnected" id=1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5 namespace=k8s.io Apr 21 10:46:59.217022 containerd[1989]: time="2026-04-21T10:46:59.217013165Z" level=warning msg="cleaning up after shim disconnected" id=1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5 namespace=k8s.io Apr 21 10:46:59.217022 containerd[1989]: time="2026-04-21T10:46:59.217025005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:46:59.221868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5-rootfs.mount: Deactivated successfully. Apr 21 10:46:59.520939 kubelet[3505]: I0421 10:46:59.520820 3505 scope.go:117] "RemoveContainer" containerID="1ab8a22184451adcb494343b5f16094135e39d4ad10ac8e0b2f0376c150d92e5" Apr 21 10:46:59.523318 containerd[1989]: time="2026-04-21T10:46:59.523280385Z" level=info msg="CreateContainer within sandbox \"64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 21 10:46:59.559670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169294222.mount: Deactivated successfully. 
Apr 21 10:46:59.563625 containerd[1989]: time="2026-04-21T10:46:59.563380102Z" level=info msg="CreateContainer within sandbox \"64681973a565b13bb5f630f7af8350078b6e9c2cbdfe514c2d880c2c8d5f414d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"40073be91550740eac91ae8ce50f6af3f60a4395d025f84fa2ddf78120516e0b\"" Apr 21 10:46:59.565488 containerd[1989]: time="2026-04-21T10:46:59.564404466Z" level=info msg="StartContainer for \"40073be91550740eac91ae8ce50f6af3f60a4395d025f84fa2ddf78120516e0b\"" Apr 21 10:46:59.608688 systemd[1]: Started cri-containerd-40073be91550740eac91ae8ce50f6af3f60a4395d025f84fa2ddf78120516e0b.scope - libcontainer container 40073be91550740eac91ae8ce50f6af3f60a4395d025f84fa2ddf78120516e0b. Apr 21 10:46:59.662791 containerd[1989]: time="2026-04-21T10:46:59.662741392Z" level=info msg="StartContainer for \"40073be91550740eac91ae8ce50f6af3f60a4395d025f84fa2ddf78120516e0b\" returns successfully" Apr 21 10:47:05.612039 systemd[1]: cri-containerd-46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a.scope: Deactivated successfully. Apr 21 10:47:05.650954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a-rootfs.mount: Deactivated successfully. 
Apr 21 10:47:05.662885 containerd[1989]: time="2026-04-21T10:47:05.662660041Z" level=info msg="shim disconnected" id=46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a namespace=k8s.io Apr 21 10:47:05.662885 containerd[1989]: time="2026-04-21T10:47:05.662765970Z" level=warning msg="cleaning up after shim disconnected" id=46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a namespace=k8s.io Apr 21 10:47:05.662885 containerd[1989]: time="2026-04-21T10:47:05.662778684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:47:06.565861 kubelet[3505]: I0421 10:47:06.565810 3505 scope.go:117] "RemoveContainer" containerID="99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7" Apr 21 10:47:06.566434 kubelet[3505]: I0421 10:47:06.565967 3505 scope.go:117] "RemoveContainer" containerID="46c3b9f7dcef2201b2c41f77fbde08e0e7def78cd5331dd26bfd2a893656f86a" Apr 21 10:47:06.568811 kubelet[3505]: E0421 10:47:06.568603 3505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-kxzs9_tigera-operator(b9df8161-f168-42ad-bbd6-035d00306582)\"" pod="tigera-operator/tigera-operator-5588576f44-kxzs9" podUID="b9df8161-f168-42ad-bbd6-035d00306582" Apr 21 10:47:06.631120 containerd[1989]: time="2026-04-21T10:47:06.631060295Z" level=info msg="RemoveContainer for \"99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7\"" Apr 21 10:47:06.657733 containerd[1989]: time="2026-04-21T10:47:06.657669175Z" level=info msg="RemoveContainer for \"99c56cf9b036f5fb85d02037b3b490ba1eec0eec1f0fdda6c631cb9330cd41d7\" returns successfully"