Apr 17 23:45:07.959942 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:45:07.959981 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:45:07.962087 kernel: BIOS-provided physical RAM map:
Apr 17 23:45:07.962113 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:45:07.962127 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:45:07.962140 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:45:07.962155 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:45:07.962169 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:45:07.962180 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:45:07.962200 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:45:07.962213 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:45:07.962224 kernel: NX (Execute Disable) protection: active
Apr 17 23:45:07.962237 kernel: APIC: Static calls initialized
Apr 17 23:45:07.962249 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:45:07.962264 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:45:07.962280 kernel: SMBIOS 2.7 present.
Apr 17 23:45:07.962293 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:45:07.962306 kernel: Hypervisor detected: KVM
Apr 17 23:45:07.962319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:45:07.962333 kernel: kvm-clock: using sched offset of 4017899361 cycles
Apr 17 23:45:07.962347 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:45:07.962362 kernel: tsc: Detected 2499.996 MHz processor
Apr 17 23:45:07.962377 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:45:07.962391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:45:07.962404 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:45:07.962424 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:45:07.962439 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:45:07.962454 kernel: Using GB pages for direct mapping
Apr 17 23:45:07.962469 kernel: Secure boot disabled
Apr 17 23:45:07.962483 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:45:07.962498 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:45:07.962512 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:45:07.962525 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:45:07.962538 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:45:07.962555 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:45:07.962567 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:45:07.962581 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:45:07.962593 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:45:07.962606 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:45:07.962620 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:45:07.962638 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:45:07.962655 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:45:07.962669 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:45:07.962683 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:45:07.962697 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:45:07.962710 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:45:07.962724 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:45:07.962738 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:45:07.962755 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:45:07.962769 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:45:07.962783 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:45:07.962797 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:45:07.962811 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:45:07.962825 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:45:07.962839 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:45:07.962853 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:45:07.962868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:45:07.962885 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:45:07.962898 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:45:07.962913 kernel: Zone ranges:
Apr 17 23:45:07.962927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:45:07.962942 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:45:07.962956 kernel: Normal empty
Apr 17 23:45:07.962970 kernel: Movable zone start for each node
Apr 17 23:45:07.962984 kernel: Early memory node ranges
Apr 17 23:45:07.963024 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:45:07.963042 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:45:07.963056 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:45:07.963071 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:45:07.963085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:45:07.963099 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:45:07.963114 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:45:07.963128 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:45:07.963142 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:45:07.963157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:45:07.963174 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:45:07.963188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:45:07.963203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:45:07.963218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:45:07.963232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:45:07.963246 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:45:07.963261 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:45:07.963275 kernel: TSC deadline timer available
Apr 17 23:45:07.963290 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:45:07.963304 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:45:07.963322 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:45:07.963337 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:45:07.963352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:45:07.963366 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:45:07.963381 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:45:07.963396 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:45:07.963410 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:45:07.963424 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:45:07.963439 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:45:07.963460 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:45:07.963475 kernel: random: crng init done
Apr 17 23:45:07.963490 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:45:07.963505 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:45:07.963518 kernel: Fallback order for Node 0: 0
Apr 17 23:45:07.963532 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:45:07.963547 kernel: Policy zone: DMA32
Apr 17 23:45:07.963562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:45:07.963580 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:45:07.963594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:45:07.963609 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:45:07.963624 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:45:07.963638 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:45:07.963653 kernel: Dynamic Preempt: voluntary
Apr 17 23:45:07.963666 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:45:07.963686 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:45:07.963701 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:45:07.963719 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:45:07.963734 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:45:07.963748 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:45:07.963762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:45:07.963776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:45:07.963791 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:45:07.963805 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:45:07.963835 kernel: Console: colour dummy device 80x25
Apr 17 23:45:07.963849 kernel: printk: console [tty0] enabled
Apr 17 23:45:07.963864 kernel: printk: console [ttyS0] enabled
Apr 17 23:45:07.963878 kernel: ACPI: Core revision 20230628
Apr 17 23:45:07.963893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:45:07.963912 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:45:07.963927 kernel: x2apic enabled
Apr 17 23:45:07.963942 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:45:07.963958 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:45:07.963977 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 17 23:45:07.963993 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:45:07.966060 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:45:07.966079 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:45:07.966093 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:45:07.966108 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:45:07.966124 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:45:07.966139 kernel: RETBleed: Vulnerable
Apr 17 23:45:07.966155 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:45:07.966170 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:45:07.966185 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:45:07.966207 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:45:07.966223 kernel: active return thunk: its_return_thunk
Apr 17 23:45:07.966238 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:45:07.966254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:45:07.966270 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:45:07.966286 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:45:07.966301 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:45:07.966317 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:45:07.966333 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:45:07.966348 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:45:07.966363 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:45:07.966382 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:45:07.966398 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:45:07.966413 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:45:07.966429 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:45:07.966445 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:45:07.966460 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:45:07.966475 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:45:07.966490 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:45:07.966506 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:45:07.966521 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:45:07.966536 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:45:07.966556 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:45:07.966571 kernel: landlock: Up and running.
Apr 17 23:45:07.966586 kernel: SELinux: Initializing.
Apr 17 23:45:07.966602 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:45:07.966618 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:45:07.966634 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:45:07.966650 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:45:07.966666 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:45:07.966682 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:45:07.966696 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:45:07.966712 kernel: signal: max sigframe size: 3632
Apr 17 23:45:07.966725 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:45:07.966740 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:45:07.966753 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:45:07.966766 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:45:07.966779 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:45:07.966794 kernel: .... node #0, CPUs: #1
Apr 17 23:45:07.966811 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:45:07.966830 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:45:07.966849 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:45:07.966864 kernel: smpboot: Max logical packages: 1
Apr 17 23:45:07.966881 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 17 23:45:07.966897 kernel: devtmpfs: initialized
Apr 17 23:45:07.966912 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:45:07.966926 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:45:07.966940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:45:07.966955 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:45:07.966971 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:45:07.966990 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:45:07.967035 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:45:07.967064 kernel: audit: type=2000 audit(1776469506.844:1): state=initialized audit_enabled=0 res=1
Apr 17 23:45:07.967081 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:45:07.967096 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:45:07.967112 kernel: cpuidle: using governor menu
Apr 17 23:45:07.967128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:45:07.967143 kernel: dca service started, version 1.12.1
Apr 17 23:45:07.967159 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:45:07.967179 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:45:07.967195 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:45:07.967211 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:45:07.967226 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:45:07.967242 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:45:07.967258 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:45:07.967273 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:45:07.967288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:45:07.967304 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:45:07.967323 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:45:07.967338 kernel: ACPI: Interpreter enabled
Apr 17 23:45:07.967354 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:45:07.967369 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:45:07.967385 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:45:07.967400 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:45:07.967416 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:45:07.967432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:45:07.967669 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:45:07.967824 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:45:07.967959 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:45:07.967977 kernel: acpiphp: Slot [3] registered
Apr 17 23:45:07.967992 kernel: acpiphp: Slot [4] registered
Apr 17 23:45:07.969079 kernel: acpiphp: Slot [5] registered
Apr 17 23:45:07.969095 kernel: acpiphp: Slot [6] registered
Apr 17 23:45:07.969109 kernel: acpiphp: Slot [7] registered
Apr 17 23:45:07.969127 kernel: acpiphp: Slot [8] registered
Apr 17 23:45:07.969140 kernel: acpiphp: Slot [9] registered
Apr 17 23:45:07.969153 kernel: acpiphp: Slot [10] registered
Apr 17 23:45:07.969167 kernel: acpiphp: Slot [11] registered
Apr 17 23:45:07.969180 kernel: acpiphp: Slot [12] registered
Apr 17 23:45:07.969197 kernel: acpiphp: Slot [13] registered
Apr 17 23:45:07.969212 kernel: acpiphp: Slot [14] registered
Apr 17 23:45:07.969228 kernel: acpiphp: Slot [15] registered
Apr 17 23:45:07.969244 kernel: acpiphp: Slot [16] registered
Apr 17 23:45:07.969263 kernel: acpiphp: Slot [17] registered
Apr 17 23:45:07.969278 kernel: acpiphp: Slot [18] registered
Apr 17 23:45:07.969294 kernel: acpiphp: Slot [19] registered
Apr 17 23:45:07.969310 kernel: acpiphp: Slot [20] registered
Apr 17 23:45:07.969326 kernel: acpiphp: Slot [21] registered
Apr 17 23:45:07.969341 kernel: acpiphp: Slot [22] registered
Apr 17 23:45:07.969357 kernel: acpiphp: Slot [23] registered
Apr 17 23:45:07.969372 kernel: acpiphp: Slot [24] registered
Apr 17 23:45:07.969388 kernel: acpiphp: Slot [25] registered
Apr 17 23:45:07.969403 kernel: acpiphp: Slot [26] registered
Apr 17 23:45:07.969422 kernel: acpiphp: Slot [27] registered
Apr 17 23:45:07.969437 kernel: acpiphp: Slot [28] registered
Apr 17 23:45:07.969452 kernel: acpiphp: Slot [29] registered
Apr 17 23:45:07.969468 kernel: acpiphp: Slot [30] registered
Apr 17 23:45:07.969483 kernel: acpiphp: Slot [31] registered
Apr 17 23:45:07.969498 kernel: PCI host bridge to bus 0000:00
Apr 17 23:45:07.969679 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:45:07.969812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:45:07.969939 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:45:07.970117 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:45:07.971125 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:45:07.971257 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:45:07.971417 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:45:07.971563 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:45:07.971703 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:45:07.971844 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:45:07.971980 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:45:07.973234 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:45:07.973382 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:45:07.973523 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:45:07.973664 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:45:07.973799 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:45:07.973949 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:45:07.974120 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:45:07.974254 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:45:07.974387 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:45:07.974522 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:45:07.974688 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:45:07.974833 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:45:07.974981 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:45:07.976723 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:45:07.976751 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:45:07.976769 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:45:07.976786 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:45:07.976803 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:45:07.976820 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:45:07.976839 kernel: iommu: Default domain type: Translated
Apr 17 23:45:07.976853 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:45:07.976868 kernel: efivars: Registered efivars operations
Apr 17 23:45:07.976883 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:45:07.976899 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:45:07.976913 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:45:07.976927 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:45:07.977098 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:45:07.977243 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:45:07.977388 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:45:07.977409 kernel: vgaarb: loaded
Apr 17 23:45:07.977425 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:45:07.977441 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:45:07.977457 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:45:07.977473 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:45:07.977489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:45:07.977505 kernel: pnp: PnP ACPI init
Apr 17 23:45:07.977520 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:45:07.977541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:45:07.977557 kernel: NET: Registered PF_INET protocol family
Apr 17 23:45:07.977572 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:45:07.977588 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:45:07.977604 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:45:07.977620 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:45:07.977636 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:45:07.977651 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:45:07.977667 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:45:07.977686 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:45:07.977702 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:45:07.977718 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:45:07.977863 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:45:07.977990 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:45:07.980194 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:45:07.980321 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:45:07.980441 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:45:07.980590 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:45:07.980610 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:45:07.980626 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:45:07.980642 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:45:07.980657 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:45:07.980672 kernel: Initialise system trusted keyrings
Apr 17 23:45:07.980687 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:45:07.980704 kernel: Key type asymmetric registered
Apr 17 23:45:07.980724 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:45:07.980742 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:45:07.980759 kernel: io scheduler mq-deadline registered
Apr 17 23:45:07.980776 kernel: io scheduler kyber registered
Apr 17 23:45:07.980793 kernel: io scheduler bfq registered
Apr 17 23:45:07.980810 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:45:07.980828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:45:07.980846 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:45:07.980864 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:45:07.980885 kernel: i8042: Warning: Keylock active
Apr 17 23:45:07.980902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:45:07.980919 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:45:07.981133 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:45:07.981275 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:45:07.981404 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:45:07 UTC (1776469507)
Apr 17 23:45:07.981542 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:45:07.981565 kernel: intel_pstate: CPU model not supported
Apr 17 23:45:07.981586 kernel: efifb: probing for efifb
Apr 17 23:45:07.981602 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:45:07.981616 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:45:07.981630 kernel: efifb: scrolling: redraw
Apr 17 23:45:07.981644 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:45:07.981659 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:45:07.981674 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:45:07.981689 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:45:07.981704 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:45:07.981722 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:45:07.981736 kernel: Segment Routing with IPv6
Apr 17 23:45:07.981751 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:45:07.981765 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:45:07.981779 kernel: Key type dns_resolver registered
Apr 17 23:45:07.981794 kernel: IPI shorthand broadcast: enabled
Apr 17 23:45:07.981835 kernel: sched_clock: Marking stable (486002176, 132758600)->(686072010, -67311234)
Apr 17 23:45:07.981854 kernel: registered taskstats version 1
Apr 17 23:45:07.981870 kernel: Loading compiled-in X.509 certificates
Apr 17 23:45:07.981888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:45:07.981904 kernel: Key type .fscrypt registered
Apr 17 23:45:07.981920 kernel: Key type fscrypt-provisioning registered
Apr 17 23:45:07.981934 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:45:07.981951 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:45:07.981966 kernel: ima: No architecture policies found
Apr 17 23:45:07.981982 kernel: clk: Disabling unused clocks
Apr 17 23:45:07.984023 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:45:07.984051 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:45:07.984067 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:45:07.984087 kernel: Run /init as init process
Apr 17 23:45:07.984103 kernel: with arguments:
Apr 17 23:45:07.984119 kernel: /init
Apr 17 23:45:07.984136 kernel: with environment:
Apr 17 23:45:07.984152 kernel: HOME=/
Apr 17 23:45:07.984168 kernel: TERM=linux
Apr 17 23:45:07.984188 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:45:07.984209 systemd[1]: Detected virtualization amazon.
Apr 17 23:45:07.984230 systemd[1]: Detected architecture x86-64. Apr 17 23:45:07.984247 systemd[1]: Running in initrd. Apr 17 23:45:07.984264 systemd[1]: No hostname configured, using default hostname. Apr 17 23:45:07.984281 systemd[1]: Hostname set to . Apr 17 23:45:07.984299 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:45:07.984315 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:45:07.984332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:45:07.984350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:45:07.984372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:45:07.984390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:45:07.984407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:45:07.984428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:45:07.984451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:45:07.984469 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:45:07.984487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:45:07.984505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:45:07.984525 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:45:07.984543 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:45:07.984560 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:45:07.984578 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:45:07.984598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:45:07.984615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:45:07.984633 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:45:07.984650 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:45:07.984667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:45:07.984682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:45:07.984697 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:45:07.984712 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:45:07.984732 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:45:07.984748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:45:07.984764 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:45:07.984779 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:45:07.984795 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:45:07.984811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:45:07.984827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:45:07.984843 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:45:07.984859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:45:07.984879 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:45:07.984933 systemd-journald[179]: Collecting audit messages is disabled. Apr 17 23:45:07.984974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 17 23:45:07.984992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:45:07.986072 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:45:07.986093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:45:07.986113 systemd-journald[179]: Journal started Apr 17 23:45:07.986153 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2da0d73b1d76d047dc2e00c5b3d745) is 4.7M, max 38.2M, 33.4M free. Apr 17 23:45:07.964455 systemd-modules-load[180]: Inserted module 'overlay' Apr 17 23:45:07.997079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:45:08.010734 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:45:08.010806 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:45:08.028291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:45:08.035142 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:45:08.035178 kernel: Bridge firewalling registered Apr 17 23:45:08.034956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:45:08.035980 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 17 23:45:08.039280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:45:08.040611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:45:08.048311 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:45:08.052268 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 17 23:45:08.053692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:45:08.070472 dracut-cmdline[209]: dracut-dracut-053 Apr 17 23:45:08.075163 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:45:08.076649 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:45:08.087247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:45:08.133124 systemd-resolved[230]: Positive Trust Anchors: Apr 17 23:45:08.134098 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:45:08.134161 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:45:08.141886 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 17 23:45:08.145738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:45:08.146455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 23:45:08.171035 kernel: SCSI subsystem initialized Apr 17 23:45:08.181033 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:45:08.192163 kernel: iscsi: registered transport (tcp) Apr 17 23:45:08.214323 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:45:08.214408 kernel: QLogic iSCSI HBA Driver Apr 17 23:45:08.253687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:45:08.262301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:45:08.288188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:45:08.288280 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:45:08.291030 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:45:08.333044 kernel: raid6: avx512x4 gen() 17882 MB/s Apr 17 23:45:08.351030 kernel: raid6: avx512x2 gen() 17673 MB/s Apr 17 23:45:08.369034 kernel: raid6: avx512x1 gen() 17649 MB/s Apr 17 23:45:08.387029 kernel: raid6: avx2x4 gen() 17629 MB/s Apr 17 23:45:08.405031 kernel: raid6: avx2x2 gen() 17622 MB/s Apr 17 23:45:08.423338 kernel: raid6: avx2x1 gen() 13724 MB/s Apr 17 23:45:08.423393 kernel: raid6: using algorithm avx512x4 gen() 17882 MB/s Apr 17 23:45:08.442295 kernel: raid6: .... xor() 7612 MB/s, rmw enabled Apr 17 23:45:08.442353 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:45:08.464037 kernel: xor: automatically using best checksumming function avx Apr 17 23:45:08.625031 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:45:08.635845 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:45:08.645382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:45:08.659566 systemd-udevd[398]: Using default interface naming scheme 'v255'. 
Apr 17 23:45:08.664788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:45:08.672201 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:45:08.692418 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Apr 17 23:45:08.724088 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:45:08.729397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:45:08.783239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:45:08.793275 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:45:08.820485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:45:08.822878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:45:08.825217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:45:08.825735 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:45:08.833340 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:45:08.867337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:45:08.878072 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:45:08.894368 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:45:08.894456 kernel: AES CTR mode by8 optimization enabled Apr 17 23:45:08.915490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:45:08.915656 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:45:08.921490 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:45:08.930747 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 23:45:08.931060 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 23:45:08.928354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:45:08.928607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:45:08.929364 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:45:08.938711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:45:08.944747 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 17 23:45:08.953021 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c3:e8:68:84:eb Apr 17 23:45:08.957029 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 23:45:08.957324 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 17 23:45:08.963925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:45:08.964077 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:45:08.973048 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 23:45:08.973568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:45:08.982563 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:45:08.982633 kernel: GPT:9289727 != 33554431 Apr 17 23:45:08.982655 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:45:08.982675 kernel: GPT:9289727 != 33554431 Apr 17 23:45:08.982694 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:45:08.982714 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:45:08.992519 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:45:08.999328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:45:09.007244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:45:09.046775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:45:09.063025 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448) Apr 17 23:45:09.081241 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (442) Apr 17 23:45:09.153900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 23:45:09.164363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 23:45:09.171281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:45:09.181975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 23:45:09.182537 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 23:45:09.187211 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:45:09.197443 disk-uuid[631]: Primary Header is updated. Apr 17 23:45:09.197443 disk-uuid[631]: Secondary Entries is updated. Apr 17 23:45:09.197443 disk-uuid[631]: Secondary Header is updated. Apr 17 23:45:09.205038 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:45:09.212605 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:45:09.221367 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:45:10.232079 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:45:10.232990 disk-uuid[632]: The operation has completed successfully. Apr 17 23:45:10.387044 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:45:10.387185 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 17 23:45:10.404268 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:45:10.409185 sh[975]: Success Apr 17 23:45:10.425025 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:45:10.541425 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:45:10.549368 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:45:10.553542 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:45:10.599296 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:45:10.599369 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:45:10.601179 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:45:10.604046 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:45:10.604097 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:45:10.673029 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:45:10.683707 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:45:10.684946 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:45:10.689328 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:45:10.696182 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 17 23:45:10.730141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:45:10.730224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:45:10.730249 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:45:10.749185 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:45:10.761343 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:45:10.765247 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:45:10.770027 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:45:10.777392 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:45:10.795161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:45:10.801346 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:45:10.830773 systemd-networkd[1167]: lo: Link UP Apr 17 23:45:10.830784 systemd-networkd[1167]: lo: Gained carrier Apr 17 23:45:10.836637 systemd-networkd[1167]: Enumeration completed Apr 17 23:45:10.837975 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:45:10.837980 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:45:10.840119 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:45:10.847086 systemd[1]: Reached target network.target - Network. Apr 17 23:45:10.847934 systemd-networkd[1167]: eth0: Link UP Apr 17 23:45:10.847940 systemd-networkd[1167]: eth0: Gained carrier Apr 17 23:45:10.847956 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:45:10.857210 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.16.149/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:45:11.002920 ignition[1144]: Ignition 2.19.0 Apr 17 23:45:11.002938 ignition[1144]: Stage: fetch-offline Apr 17 23:45:11.003232 ignition[1144]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.003246 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.004658 ignition[1144]: Ignition finished successfully Apr 17 23:45:11.006875 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:45:11.010274 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:45:11.027758 ignition[1178]: Ignition 2.19.0 Apr 17 23:45:11.027773 ignition[1178]: Stage: fetch Apr 17 23:45:11.028259 ignition[1178]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.028273 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.028391 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.045200 ignition[1178]: PUT result: OK Apr 17 23:45:11.046878 ignition[1178]: parsed url from cmdline: "" Apr 17 23:45:11.046979 ignition[1178]: no config URL provided Apr 17 23:45:11.046989 ignition[1178]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:45:11.047025 ignition[1178]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:45:11.047044 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.048718 ignition[1178]: PUT result: OK Apr 17 23:45:11.048778 ignition[1178]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 23:45:11.049569 ignition[1178]: GET result: OK Apr 17 23:45:11.049682 ignition[1178]: parsing config with SHA512: e6e8f54393cfe4c0a25f939a0dd5832bd50d6a11e5ca33986959301a6fa26635bed983d1ee2e294efca3225dc653596227ac468818b01d6cc5c7afade0810407 Apr 17 23:45:11.057891 unknown[1178]: fetched base config from "system" Apr 17 
23:45:11.057909 unknown[1178]: fetched base config from "system" Apr 17 23:45:11.058617 ignition[1178]: fetch: fetch complete Apr 17 23:45:11.057917 unknown[1178]: fetched user config from "aws" Apr 17 23:45:11.058625 ignition[1178]: fetch: fetch passed Apr 17 23:45:11.058684 ignition[1178]: Ignition finished successfully Apr 17 23:45:11.061656 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:45:11.069362 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:45:11.085993 ignition[1185]: Ignition 2.19.0 Apr 17 23:45:11.086023 ignition[1185]: Stage: kargs Apr 17 23:45:11.086510 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.086524 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.086641 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.087462 ignition[1185]: PUT result: OK Apr 17 23:45:11.089961 ignition[1185]: kargs: kargs passed Apr 17 23:45:11.090063 ignition[1185]: Ignition finished successfully Apr 17 23:45:11.091479 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:45:11.096213 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:45:11.112180 ignition[1191]: Ignition 2.19.0 Apr 17 23:45:11.112201 ignition[1191]: Stage: disks Apr 17 23:45:11.112672 ignition[1191]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.112686 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.112804 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.113757 ignition[1191]: PUT result: OK Apr 17 23:45:11.116504 ignition[1191]: disks: disks passed Apr 17 23:45:11.116580 ignition[1191]: Ignition finished successfully Apr 17 23:45:11.118415 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:45:11.119027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 17 23:45:11.119493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:45:11.120069 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:45:11.120608 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:45:11.121298 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:45:11.126251 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:45:11.156941 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:45:11.160490 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:45:11.166271 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:45:11.272386 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:45:11.273270 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:45:11.274373 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:45:11.285287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:45:11.287851 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:45:11.291308 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:45:11.291381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:45:11.291414 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:45:11.305976 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:45:11.312038 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1218) Apr 17 23:45:11.314547 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 23:45:11.320521 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:45:11.320546 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:45:11.320559 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:45:11.330027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:45:11.331873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:45:11.528926 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:45:11.542460 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:45:11.547941 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:45:11.552767 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:45:11.727854 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:45:11.736173 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:45:11.740341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:45:11.747763 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:45:11.751028 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:45:11.781031 ignition[1330]: INFO : Ignition 2.19.0 Apr 17 23:45:11.783472 ignition[1330]: INFO : Stage: mount Apr 17 23:45:11.783472 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.783472 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.783472 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.788089 ignition[1330]: INFO : PUT result: OK Apr 17 23:45:11.790287 ignition[1330]: INFO : mount: mount passed Apr 17 23:45:11.790287 ignition[1330]: INFO : Ignition finished successfully Apr 17 23:45:11.793653 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:45:11.794382 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:45:11.801522 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:45:11.809498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:45:11.833166 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345) Apr 17 23:45:11.838115 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:45:11.838203 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:45:11.838226 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:45:11.846042 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:45:11.848191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:45:11.871349 ignition[1361]: INFO : Ignition 2.19.0 Apr 17 23:45:11.872211 ignition[1361]: INFO : Stage: files Apr 17 23:45:11.872763 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:45:11.872763 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:45:11.872763 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:45:11.875205 ignition[1361]: INFO : PUT result: OK Apr 17 23:45:11.878577 ignition[1361]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:45:11.879959 ignition[1361]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:45:11.879959 ignition[1361]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:45:11.900254 systemd-networkd[1167]: eth0: Gained IPv6LL Apr 17 23:45:11.907685 ignition[1361]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:45:11.908782 ignition[1361]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:45:11.908782 ignition[1361]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:45:11.908286 unknown[1361]: wrote ssh authorized keys file for user: core Apr 17 23:45:11.912749 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:45:11.912749 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:45:11.912749 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:45:11.912749 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:45:12.011145 ignition[1361]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:45:12.158808 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:45:12.160296 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:45:12.170350 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:45:12.470264 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:45:13.252794 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:45:13.252794 ignition[1361]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:45:13.255940 ignition[1361]: INFO : files: files passed
Apr 17 23:45:13.255940 ignition[1361]: INFO : Ignition finished successfully
Apr 17 23:45:13.257795 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:45:13.266337 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:45:13.273248 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:45:13.276585 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:45:13.276744 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:45:13.297826 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:45:13.297826 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:45:13.301301 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:45:13.303369 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:45:13.304125 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:45:13.309310 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:45:13.345697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:45:13.345833 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:45:13.347076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:45:13.348269 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:45:13.349217 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:45:13.356236 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:45:13.369563 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:45:13.376266 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:45:13.388755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:45:13.389599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:45:13.390588 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:45:13.391466 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:45:13.391647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:45:13.392881 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:45:13.393860 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:45:13.394672 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:45:13.395452 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:45:13.396216 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:45:13.396973 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:45:13.397828 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:45:13.398617 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:45:13.399761 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:45:13.400515 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:45:13.401346 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:45:13.401524 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:45:13.402626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:45:13.403432 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:45:13.404120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:45:13.404261 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:45:13.404936 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:45:13.405260 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:45:13.406546 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:45:13.406771 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:45:13.407452 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:45:13.407600 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:45:13.414322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:45:13.417203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:45:13.420185 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:45:13.421169 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:45:13.423334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:45:13.424153 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:45:13.433370 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:45:13.433517 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:45:13.437433 ignition[1415]: INFO : Ignition 2.19.0
Apr 17 23:45:13.437433 ignition[1415]: INFO : Stage: umount
Apr 17 23:45:13.437433 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:45:13.437433 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:45:13.437433 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:45:13.440475 ignition[1415]: INFO : PUT result: OK
Apr 17 23:45:13.442136 ignition[1415]: INFO : umount: umount passed
Apr 17 23:45:13.443097 ignition[1415]: INFO : Ignition finished successfully
Apr 17 23:45:13.445258 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:45:13.445410 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:45:13.446808 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:45:13.446927 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:45:13.449802 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:45:13.449884 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:45:13.451586 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:45:13.451663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:45:13.453229 systemd[1]: Stopped target network.target - Network.
Apr 17 23:45:13.453673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:45:13.453739 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:45:13.456130 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:45:13.456570 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:45:13.460081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:45:13.460606 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:45:13.461147 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:45:13.461642 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:45:13.461698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:45:13.464122 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:45:13.464175 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:45:13.465206 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:45:13.465285 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:45:13.465814 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:45:13.465881 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:45:13.466609 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:45:13.467294 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:45:13.469913 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:45:13.470066 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Apr 17 23:45:13.473543 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:45:13.473660 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:45:13.474964 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:45:13.475296 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:45:13.477924 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:45:13.478262 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:45:13.483168 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:45:13.484227 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:45:13.484322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:45:13.485226 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:45:13.485292 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:45:13.488145 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:45:13.488212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:45:13.488646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:45:13.488705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:45:13.489507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:45:13.509220 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:45:13.509467 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:45:13.511672 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:45:13.511805 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:45:13.513374 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:45:13.513471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:45:13.513952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:45:13.513997 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:45:13.514708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:45:13.514771 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:45:13.515863 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:45:13.515927 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:45:13.517100 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:45:13.517168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:45:13.522321 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:45:13.522938 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:45:13.523037 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:45:13.523736 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:45:13.523798 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:45:13.524425 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:45:13.524483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:45:13.526215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:45:13.526274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:45:13.536606 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:45:13.536729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:45:13.809232 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:45:13.809375 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:45:13.810540 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:45:13.811251 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:45:13.811322 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:45:13.818174 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:45:13.830140 systemd[1]: Switching root.
Apr 17 23:45:13.858470 systemd-journald[179]: Journal stopped
Apr 17 23:45:15.420052 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:45:15.420158 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:45:15.420183 kernel: SELinux: policy capability open_perms=1
Apr 17 23:45:15.420204 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:45:15.420231 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:45:15.420255 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:45:15.420282 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:45:15.420308 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:45:15.420329 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:45:15.420350 kernel: audit: type=1403 audit(1776469514.348:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:45:15.420372 systemd[1]: Successfully loaded SELinux policy in 43.651ms.
Apr 17 23:45:15.420407 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.961ms.
Apr 17 23:45:15.420436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:45:15.420460 systemd[1]: Detected virtualization amazon.
Apr 17 23:45:15.420482 systemd[1]: Detected architecture x86-64.
Apr 17 23:45:15.420503 systemd[1]: Detected first boot.
Apr 17 23:45:15.420526 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:45:15.420549 zram_generator::config[1475]: No configuration found.
Apr 17 23:45:15.420572 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:45:15.420596 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:45:15.420621 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:45:15.420648 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:45:15.420671 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:45:15.420693 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:45:15.420714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:45:15.420736 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:45:15.420758 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:45:15.420782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:45:15.420804 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:45:15.420826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:45:15.420847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:45:15.420869 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:45:15.420890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:45:15.420912 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:45:15.420934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:45:15.420956 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:45:15.420978 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:45:15.421023 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:45:15.421049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:45:15.421071 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:45:15.421093 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:45:15.423574 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:45:15.423621 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:45:15.423644 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:45:15.423667 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:45:15.423690 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:45:15.423719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:45:15.423742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:45:15.423763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:45:15.423785 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:45:15.423806 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:45:15.423828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:45:15.423850 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:45:15.423873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:45:15.423894 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:45:15.423920 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:45:15.423942 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:45:15.423963 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:45:15.423985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:45:15.436073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:45:15.436119 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:45:15.436149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:45:15.436170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:45:15.436191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:45:15.436217 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:45:15.436239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:45:15.436257 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:45:15.436275 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:45:15.436297 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:45:15.436317 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:45:15.436338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:45:15.436358 kernel: loop: module loaded
Apr 17 23:45:15.436380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:45:15.436403 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:45:15.436469 systemd-journald[1582]: Collecting audit messages is disabled.
Apr 17 23:45:15.436509 kernel: fuse: init (API version 7.39)
Apr 17 23:45:15.436530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:45:15.436552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:45:15.436574 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:45:15.436595 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:45:15.436619 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:45:15.436640 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:45:15.436662 systemd-journald[1582]: Journal started
Apr 17 23:45:15.436703 systemd-journald[1582]: Runtime Journal (/run/log/journal/ec2da0d73b1d76d047dc2e00c5b3d745) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:45:15.449261 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:45:15.444145 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:45:15.446248 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:45:15.448532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:45:15.450972 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:45:15.453478 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:45:15.453735 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:45:15.456656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:45:15.456886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:45:15.458506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:45:15.458735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:45:15.460861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:45:15.461107 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:45:15.461852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:45:15.462250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:45:15.463415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:45:15.464772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:45:15.465921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:45:15.481696 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:45:15.483084 kernel: ACPI: bus type drm_connector registered
Apr 17 23:45:15.494172 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:45:15.508239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:45:15.508997 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:45:15.513953 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:45:15.522188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:45:15.524199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:45:15.536225 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:45:15.537407 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:45:15.542988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:45:15.553905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:45:15.559379 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:45:15.569199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:45:15.572767 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:45:15.574656 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:45:15.575883 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:45:15.580706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:45:15.583517 systemd-journald[1582]: Time spent on flushing to /var/log/journal/ec2da0d73b1d76d047dc2e00c5b3d745 is 38.296ms for 974 entries.
Apr 17 23:45:15.583517 systemd-journald[1582]: System Journal (/var/log/journal/ec2da0d73b1d76d047dc2e00c5b3d745) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:45:15.633228 systemd-journald[1582]: Received client request to flush runtime journal.
Apr 17 23:45:15.635891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:45:15.639755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:45:15.650915 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:45:15.666547 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:45:15.675473 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Apr 17 23:45:15.675894 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Apr 17 23:45:15.693511 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:45:15.702511 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:45:15.703629 udevadm[1640]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:45:15.754797 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:45:15.768264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:45:15.791623 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Apr 17 23:45:15.792110 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Apr 17 23:45:15.800569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:45:16.292271 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:45:16.299347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:45:16.326710 systemd-udevd[1655]: Using default interface naming scheme 'v255'.
Apr 17 23:45:16.383469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:45:16.394229 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:45:16.415587 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:45:16.458438 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 17 23:45:16.470707 (udev-worker)[1662]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:45:16.523265 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:45:16.575048 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 17 23:45:16.598023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 23:45:16.625102 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 17 23:45:16.646028 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:45:16.651063 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 17 23:45:16.656047 kernel: ACPI: button: Sleep Button [SLPF]
Apr 17 23:45:16.665866 systemd-networkd[1658]: lo: Link UP
Apr 17 23:45:16.666285 systemd-networkd[1658]: lo: Gained carrier
Apr 17 23:45:16.668565 systemd-networkd[1658]: Enumeration completed
Apr 17 23:45:16.668733 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:45:16.671219 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:45:16.673122 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:45:16.671229 systemd-networkd[1658]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:45:16.677257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:45:16.678238 systemd-networkd[1658]: eth0: Link UP
Apr 17 23:45:16.678496 systemd-networkd[1658]: eth0: Gained carrier
Apr 17 23:45:16.678522 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:45:16.690188 systemd-networkd[1658]: eth0: DHCPv4 address 172.31.16.149/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:45:16.703536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:45:16.716857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:45:16.717278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:45:16.727245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:45:16.743029 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1662) Apr 17 23:45:16.877865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:45:16.900479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:45:16.907946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:45:16.917201 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:45:16.937373 lvm[1783]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:45:16.963313 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:45:16.964953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:45:16.973361 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:45:16.978733 lvm[1786]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:45:17.004349 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:45:17.005990 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:45:17.006711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:45:17.006773 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:45:17.007358 systemd[1]: Reached target machines.target - Containers. 
Apr 17 23:45:17.009603 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:45:17.021290 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:45:17.024647 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:45:17.027384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:45:17.037476 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:45:17.042236 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:45:17.052216 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 17 23:45:17.055115 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:45:17.070592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:45:17.084027 kernel: loop0: detected capacity change from 0 to 142488 Apr 17 23:45:17.092895 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:45:17.093963 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 17 23:45:17.168119 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:45:17.192031 kernel: loop1: detected capacity change from 0 to 61336 Apr 17 23:45:17.255028 kernel: loop2: detected capacity change from 0 to 140768 Apr 17 23:45:17.358024 kernel: loop3: detected capacity change from 0 to 228704 Apr 17 23:45:17.449178 kernel: loop4: detected capacity change from 0 to 142488 Apr 17 23:45:17.483220 kernel: loop5: detected capacity change from 0 to 61336 Apr 17 23:45:17.503038 kernel: loop6: detected capacity change from 0 to 140768 Apr 17 23:45:17.537168 kernel: loop7: detected capacity change from 0 to 228704 Apr 17 23:45:17.570457 (sd-merge)[1808]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 17 23:45:17.571226 (sd-merge)[1808]: Merged extensions into '/usr'. Apr 17 23:45:17.576797 systemd[1]: Reloading requested from client PID 1794 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:45:17.576815 systemd[1]: Reloading... Apr 17 23:45:17.681028 zram_generator::config[1836]: No configuration found. Apr 17 23:45:17.852155 systemd-networkd[1658]: eth0: Gained IPv6LL Apr 17 23:45:17.852566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:45:17.956045 systemd[1]: Reloading finished in 378 ms. Apr 17 23:45:17.975112 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:45:17.977689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:45:17.991346 systemd[1]: Starting ensure-sysext.service... Apr 17 23:45:17.999254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:45:18.009703 systemd[1]: Reloading requested from client PID 1895 ('systemctl') (unit ensure-sysext.service)... 
Apr 17 23:45:18.009727 systemd[1]: Reloading... Apr 17 23:45:18.037304 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:45:18.039613 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:45:18.042702 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:45:18.043217 systemd-tmpfiles[1896]: ACLs are not supported, ignoring. Apr 17 23:45:18.043318 systemd-tmpfiles[1896]: ACLs are not supported, ignoring. Apr 17 23:45:18.049907 ldconfig[1790]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:45:18.052162 systemd-tmpfiles[1896]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:45:18.052432 systemd-tmpfiles[1896]: Skipping /boot Apr 17 23:45:18.079989 systemd-tmpfiles[1896]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:45:18.082258 systemd-tmpfiles[1896]: Skipping /boot Apr 17 23:45:18.111159 zram_generator::config[1925]: No configuration found. Apr 17 23:45:18.266401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:45:18.341673 systemd[1]: Reloading finished in 331 ms. Apr 17 23:45:18.361952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:45:18.363367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:45:18.383305 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:45:18.388210 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 17 23:45:18.399200 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:45:18.407775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:45:18.417703 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:45:18.434208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:45:18.434684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:45:18.439795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:45:18.454351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:45:18.462424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:45:18.464327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:45:18.464617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:45:18.479737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:45:18.480185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:45:18.480523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:45:18.480762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 17 23:45:18.490386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:45:18.490790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:45:18.500492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:45:18.502781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:45:18.503102 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:45:18.505787 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:45:18.511866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:45:18.513317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:45:18.513578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:45:18.516852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:45:18.517343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:45:18.518501 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:45:18.518722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:45:18.520695 augenrules[2016]: No rules Apr 17 23:45:18.526638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:45:18.531903 systemd[1]: Finished ensure-sysext.service. Apr 17 23:45:18.536825 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:45:18.539870 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:45:18.551058 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 17 23:45:18.558899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:45:18.559051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:45:18.566579 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:45:18.597308 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:45:18.610444 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:45:18.612755 systemd-resolved[1997]: Positive Trust Anchors: Apr 17 23:45:18.612767 systemd-resolved[1997]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:45:18.612828 systemd-resolved[1997]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:45:18.614061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:45:18.620250 systemd-resolved[1997]: Defaulting to hostname 'linux'. Apr 17 23:45:18.622350 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:45:18.622916 systemd[1]: Reached target network.target - Network. 
Apr 17 23:45:18.623361 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:45:18.623740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:45:18.624155 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:45:18.624622 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:45:18.625122 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:45:18.625652 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:45:18.626127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:45:18.626491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:45:18.626862 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:45:18.626911 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:45:18.627272 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:45:18.628190 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:45:18.630145 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:45:18.631758 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:45:18.635269 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:45:18.635831 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:45:18.636460 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:45:18.637419 systemd[1]: System is tainted: cgroupsv1 Apr 17 23:45:18.637485 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 17 23:45:18.637524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:45:18.641181 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:45:18.645300 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:45:18.650190 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:45:18.657221 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:45:18.662908 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:45:18.665779 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:45:18.689590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:18.708204 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:45:18.719208 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:45:18.724415 jq[2045]: false Apr 17 23:45:18.726598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:45:18.743175 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:45:18.752457 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:45:18.769967 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:45:18.786849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 17 23:45:18.787937 dbus-daemon[2043]: [system] SELinux support is enabled Apr 17 23:45:18.792653 dbus-daemon[2043]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1658 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:45:18.803101 extend-filesystems[2046]: Found loop4 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found loop5 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found loop6 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found loop7 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p1 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p2 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p3 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found usr Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p4 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p6 Apr 17 23:45:18.803101 extend-filesystems[2046]: Found nvme0n1p7 Apr 17 23:45:18.821382 extend-filesystems[2046]: Found nvme0n1p9 Apr 17 23:45:18.821382 extend-filesystems[2046]: Checking size of /dev/nvme0n1p9 Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: ---------------------------------------------------- Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: corporation. 
Support and training for ntp-4 are Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: available at https://www.nwtime.org/support Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: ---------------------------------------------------- Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: proto: precision = 0.075 usec (-24) Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: basedate set to 2026-04-05 Apr 17 23:45:18.823110 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: gps base set to 2026-04-05 (week 2413) Apr 17 23:45:18.815877 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:45:18.806401 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:45:18.815906 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:45:18.809744 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:45:18.815918 ntpd[2052]: ---------------------------------------------------- Apr 17 23:45:18.815930 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:45:18.815942 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:45:18.815953 ntpd[2052]: corporation. 
Support and training for ntp-4 are Apr 17 23:45:18.815964 ntpd[2052]: available at https://www.nwtime.org/support Apr 17 23:45:18.815977 ntpd[2052]: ---------------------------------------------------- Apr 17 23:45:18.818454 ntpd[2052]: proto: precision = 0.075 usec (-24) Apr 17 23:45:18.820303 ntpd[2052]: basedate set to 2026-04-05 Apr 17 23:45:18.820324 ntpd[2052]: gps base set to 2026-04-05 (week 2413) Apr 17 23:45:18.825321 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen normally on 3 eth0 172.31.16.149:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen normally on 4 lo [::1]:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listen normally on 5 eth0 [fe80::4c3:e8ff:fe68:84eb%2]:123 Apr 17 23:45:18.827115 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Apr 17 23:45:18.825371 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:45:18.825580 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:45:18.825626 ntpd[2052]: Listen normally on 3 eth0 172.31.16.149:123 Apr 17 23:45:18.825668 ntpd[2052]: Listen normally on 4 lo [::1]:123 Apr 17 23:45:18.825727 ntpd[2052]: Listen normally on 5 eth0 [fe80::4c3:e8ff:fe68:84eb%2]:123 Apr 17 23:45:18.825788 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Apr 17 23:45:18.828120 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:45:18.828216 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:45:18.828301 ntpd[2052]: kernel reports 
TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:45:18.828367 ntpd[2052]: 17 Apr 23:45:18 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:45:18.830194 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:45:18.846140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:45:18.851079 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:45:18.866550 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:45:18.866917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:45:18.878557 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:45:18.878928 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:45:18.903182 jq[2075]: true Apr 17 23:45:18.909801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:45:18.910175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:45:18.936662 extend-filesystems[2046]: Resized partition /dev/nvme0n1p9 Apr 17 23:45:18.946695 extend-filesystems[2095]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:45:18.946719 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:45:18.984041 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:45:18.984158 coreos-metadata[2042]: Apr 17 23:45:18.982 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:45:18.984158 coreos-metadata[2042]: Apr 17 23:45:18.983 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:45:18.994683 update_engine[2072]: I20260417 23:45:18.966332 2072 main.cc:92] Flatcar Update Engine starting Apr 17 23:45:18.994683 update_engine[2072]: I20260417 23:45:18.987063 2072 update_check_scheduler.cc:74] Next update check in 6m8s Apr 17 23:45:18.994208 (ntainerd)[2098]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:45:19.004914 jq[2094]: true Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.985 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.985 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.985 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.985 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.986 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.986 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.988 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.990 INFO Fetch failed with 404: resource not found Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.990 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.990 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.990 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.991 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.991 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.992 INFO Fetch successful Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.992 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:45:19.005063 coreos-metadata[2042]: Apr 17 23:45:18.993 INFO Fetch successful Apr 17 23:45:19.077915 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:45:19.092042 tar[2085]: linux-amd64/LICENSE Apr 17 23:45:19.091739 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:45:19.094044 tar[2085]: linux-amd64/helm Apr 17 23:45:19.096145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:45:19.096311 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:45:19.110262 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Apr 17 23:45:19.111040 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:45:19.111231 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:45:19.113529 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:45:19.130570 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:45:19.134074 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:45:19.135573 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:45:19.183210 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:45:19.185889 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:45:19.196710 systemd-logind[2068]: Watching system buttons on /dev/input/event2 (Power Button) Apr 17 23:45:19.196736 systemd-logind[2068]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 17 23:45:19.196759 systemd-logind[2068]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:45:19.198204 systemd-logind[2068]: New seat seat0. Apr 17 23:45:19.204768 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:45:19.243115 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1662) Apr 17 23:45:19.281917 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:45:19.306025 bash[2152]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:45:19.310226 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Apr 17 23:45:19.319923 extend-filesystems[2095]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:45:19.319923 extend-filesystems[2095]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:45:19.319923 extend-filesystems[2095]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:45:19.330233 extend-filesystems[2046]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:45:19.323084 systemd[1]: Starting sshkeys.service... Apr 17 23:45:19.328721 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:45:19.335273 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:45:19.426883 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:45:19.436060 amazon-ssm-agent[2148]: Initializing new seelog logger Apr 17 23:45:19.437776 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:45:19.451114 amazon-ssm-agent[2148]: New Seelog Logger Creation Complete Apr 17 23:45:19.451329 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.452036 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.452687 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 processing appconfig overrides Apr 17 23:45:19.471699 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO Proxy environment variables: Apr 17 23:45:19.475838 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.475838 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.475838 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 processing appconfig overrides Apr 17 23:45:19.476073 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 17 23:45:19.476073 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.476073 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 processing appconfig overrides Apr 17 23:45:19.511014 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.511014 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:45:19.511014 amazon-ssm-agent[2148]: 2026/04/17 23:45:19 processing appconfig overrides Apr 17 23:45:19.580036 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO no_proxy: Apr 17 23:45:19.675141 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:45:19.678248 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:45:19.685621 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO https_proxy: Apr 17 23:45:19.692226 dbus-daemon[2043]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2136 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:45:19.717435 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 17 23:45:19.745061 containerd[2098]: time="2026-04-17T23:45:19.742588277Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:45:19.782586 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO http_proxy: Apr 17 23:45:19.799290 polkitd[2235]: Started polkitd version 121 Apr 17 23:45:19.822508 coreos-metadata[2190]: Apr 17 23:45:19.822 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:45:19.827045 coreos-metadata[2190]: Apr 17 23:45:19.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:45:19.825582 polkitd[2235]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:45:19.827146 locksmithd[2139]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:45:19.825666 polkitd[2235]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:45:19.832546 coreos-metadata[2190]: Apr 17 23:45:19.827 INFO Fetch successful Apr 17 23:45:19.832546 coreos-metadata[2190]: Apr 17 23:45:19.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:45:19.830113 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 17 23:45:19.828760 polkitd[2235]: Finished loading, compiling and executing 2 rules Apr 17 23:45:19.829606 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:45:19.830123 polkitd[2235]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:45:19.839677 coreos-metadata[2190]: Apr 17 23:45:19.833 INFO Fetch successful Apr 17 23:45:19.843115 unknown[2190]: wrote ssh authorized keys file for user: core Apr 17 23:45:19.890437 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:45:19.893129 update-ssh-keys[2264]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:45:19.901329 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:45:19.930565 systemd[1]: Finished sshkeys.service. Apr 17 23:45:19.951769 systemd-hostnamed[2136]: Hostname set to (transient) Apr 17 23:45:19.951890 systemd-resolved[1997]: System hostname changed to 'ip-172-31-16-149'. Apr 17 23:45:19.966028 containerd[2098]: time="2026-04-17T23:45:19.964197316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966278472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966323933Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966348570Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966533790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966555342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966624328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:45:19.966747 containerd[2098]: time="2026-04-17T23:45:19.966643062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.967052 containerd[2098]: time="2026-04-17T23:45:19.966926942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:45:19.967052 containerd[2098]: time="2026-04-17T23:45:19.966952213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.967052 containerd[2098]: time="2026-04-17T23:45:19.966972906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:45:19.967052 containerd[2098]: time="2026-04-17T23:45:19.966988723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.967205 containerd[2098]: time="2026-04-17T23:45:19.967104693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:45:19.968120 containerd[2098]: time="2026-04-17T23:45:19.967357644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:45:19.968120 containerd[2098]: time="2026-04-17T23:45:19.967583638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:45:19.968120 containerd[2098]: time="2026-04-17T23:45:19.967605636Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:45:19.968120 containerd[2098]: time="2026-04-17T23:45:19.967706776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:45:19.968120 containerd[2098]: time="2026-04-17T23:45:19.967763037Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:45:19.977790 containerd[2098]: time="2026-04-17T23:45:19.977598678Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:45:19.977790 containerd[2098]: time="2026-04-17T23:45:19.977684495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:45:19.977790 containerd[2098]: time="2026-04-17T23:45:19.977707495Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:45:19.977790 containerd[2098]: time="2026-04-17T23:45:19.977773376Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:45:19.977790 containerd[2098]: time="2026-04-17T23:45:19.977795350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 17 23:45:19.978075 containerd[2098]: time="2026-04-17T23:45:19.978014018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978514983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978662519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978685116Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978706785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978728867Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978749145Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978769484Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978790506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978814896Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978834576Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978852921Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978875345Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978911917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.979091 containerd[2098]: time="2026-04-17T23:45:19.978932361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.979675 containerd[2098]: time="2026-04-17T23:45:19.978950855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.979675 containerd[2098]: time="2026-04-17T23:45:19.978978027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.978995604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.980954935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.980994707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981033271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981055587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981080749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981100738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981120204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981141961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981180121Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981218108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981237601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981255068Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:45:19.987086 containerd[2098]: time="2026-04-17T23:45:19.981321861Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981347205Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981365242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981383290Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981400209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981419987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981436173Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:45:19.987657 containerd[2098]: time="2026-04-17T23:45:19.981452049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:45:19.987915 containerd[2098]: time="2026-04-17T23:45:19.981886539Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:45:19.987915 containerd[2098]: time="2026-04-17T23:45:19.981978729Z" level=info msg="Connect containerd service" Apr 17 23:45:19.987915 containerd[2098]: time="2026-04-17T23:45:19.983131045Z" level=info msg="using legacy CRI server" Apr 17 23:45:19.987915 containerd[2098]: time="2026-04-17T23:45:19.983152278Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:45:19.987915 containerd[2098]: time="2026-04-17T23:45:19.983294305Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:45:19.995424 containerd[2098]: time="2026-04-17T23:45:19.992638155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996183829Z" level=info msg="Start subscribing containerd event" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996269169Z" level=info msg="Start recovering state" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996372142Z" level=info msg="Start event monitor" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996398297Z" 
level=info msg="Start snapshots syncer" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996414888Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996433650Z" level=info msg="Start streaming server" Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996781756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996839477Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:45:19.998369 containerd[2098]: time="2026-04-17T23:45:19.996909846Z" level=info msg="containerd successfully booted in 0.258113s" Apr 17 23:45:19.997269 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:45:20.000253 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:45:20.099096 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO Agent will take identity from EC2 Apr 17 23:45:20.198306 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:45:20.297628 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:45:20.361953 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [Registrar] Starting registrar module Apr 17 23:45:20.362106 amazon-ssm-agent[2148]: 2026-04-17 23:45:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:45:20.362358 amazon-ssm-agent[2148]: 2026-04-17 23:45:20 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:45:20.362400 amazon-ssm-agent[2148]: 2026-04-17 23:45:20 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:45:20.362400 amazon-ssm-agent[2148]: 2026-04-17 23:45:20 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:45:20.362466 amazon-ssm-agent[2148]: 2026-04-17 23:45:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:45:20.394231 sshd_keygen[2092]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:45:20.399395 amazon-ssm-agent[2148]: 2026-04-17 23:45:20 INFO [CredentialRefresher] Next credential rotation will be in 32.13331263831667 minutes Apr 17 23:45:20.434827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:45:20.447618 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:45:20.465935 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:45:20.466317 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:45:20.481163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:45:20.499245 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:45:20.506685 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:45:20.518571 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:45:20.519893 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 23:45:20.695165 tar[2085]: linux-amd64/README.md Apr 17 23:45:20.711154 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:45:21.381511 amazon-ssm-agent[2148]: 2026-04-17 23:45:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:45:21.485200 amazon-ssm-agent[2148]: 2026-04-17 23:45:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2314) started Apr 17 23:45:21.487297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:21.489508 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:45:21.491691 systemd[1]: Startup finished in 7.265s (kernel) + 7.184s (userspace) = 14.449s. Apr 17 23:45:21.500816 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:21.586229 amazon-ssm-agent[2148]: 2026-04-17 23:45:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:45:22.194371 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:45:22.201617 systemd[1]: Started sshd@0-172.31.16.149:22-20.229.252.112:50370.service - OpenSSH per-connection server daemon (20.229.252.112:50370). Apr 17 23:45:22.464655 kubelet[2329]: E0417 23:45:22.464507 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:22.467415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:22.467733 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 17 23:45:23.218677 sshd[2340]: Accepted publickey for core from 20.229.252.112 port 50370 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:23.222305 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:23.233613 systemd-logind[2068]: New session 1 of user core. Apr 17 23:45:23.235225 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:45:23.240324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:45:23.256554 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:45:23.266084 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:45:23.278301 (systemd)[2350]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:45:23.399636 systemd[2350]: Queued start job for default target default.target. Apr 17 23:45:23.400563 systemd[2350]: Created slice app.slice - User Application Slice. Apr 17 23:45:23.400601 systemd[2350]: Reached target paths.target - Paths. Apr 17 23:45:23.400621 systemd[2350]: Reached target timers.target - Timers. Apr 17 23:45:23.405280 systemd[2350]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:45:23.415226 systemd[2350]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:45:23.415315 systemd[2350]: Reached target sockets.target - Sockets. Apr 17 23:45:23.415335 systemd[2350]: Reached target basic.target - Basic System. Apr 17 23:45:23.415558 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:45:23.417181 systemd[2350]: Reached target default.target - Main User Target. Apr 17 23:45:23.417241 systemd[2350]: Startup finished in 131ms. Apr 17 23:45:23.423519 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 17 23:45:24.121397 systemd[1]: Started sshd@1-172.31.16.149:22-20.229.252.112:50386.service - OpenSSH per-connection server daemon (20.229.252.112:50386). Apr 17 23:45:25.096093 sshd[2363]: Accepted publickey for core from 20.229.252.112 port 50386 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:25.097774 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:25.102475 systemd-logind[2068]: New session 2 of user core. Apr 17 23:45:25.109448 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:45:25.776604 sshd[2363]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:25.781764 systemd[1]: sshd@1-172.31.16.149:22-20.229.252.112:50386.service: Deactivated successfully. Apr 17 23:45:25.783075 systemd-logind[2068]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:45:25.785838 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:45:25.788122 systemd-logind[2068]: Removed session 2. Apr 17 23:45:25.958407 systemd[1]: Started sshd@2-172.31.16.149:22-20.229.252.112:33716.service - OpenSSH per-connection server daemon (20.229.252.112:33716). Apr 17 23:45:26.961572 sshd[2371]: Accepted publickey for core from 20.229.252.112 port 33716 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:26.962261 sshd[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:26.967650 systemd-logind[2068]: New session 3 of user core. Apr 17 23:45:26.973456 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:45:27.655222 sshd[2371]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:27.661501 systemd-logind[2068]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:45:27.661719 systemd[1]: sshd@2-172.31.16.149:22-20.229.252.112:33716.service: Deactivated successfully. Apr 17 23:45:27.664975 systemd[1]: session-3.scope: Deactivated successfully. 
Apr 17 23:45:27.666244 systemd-logind[2068]: Removed session 3. Apr 17 23:45:27.828513 systemd[1]: Started sshd@3-172.31.16.149:22-20.229.252.112:33722.service - OpenSSH per-connection server daemon (20.229.252.112:33722). Apr 17 23:45:28.831469 sshd[2379]: Accepted publickey for core from 20.229.252.112 port 33722 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:28.832260 sshd[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:28.837552 systemd-logind[2068]: New session 4 of user core. Apr 17 23:45:28.847550 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:45:29.531722 sshd[2379]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:29.537312 systemd[1]: sshd@3-172.31.16.149:22-20.229.252.112:33722.service: Deactivated successfully. Apr 17 23:45:29.539651 systemd-logind[2068]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:45:29.541381 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:45:29.542589 systemd-logind[2068]: Removed session 4. Apr 17 23:45:29.694357 systemd[1]: Started sshd@4-172.31.16.149:22-20.229.252.112:33726.service - OpenSSH per-connection server daemon (20.229.252.112:33726). Apr 17 23:45:30.664165 sshd[2387]: Accepted publickey for core from 20.229.252.112 port 33726 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:30.665731 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:30.671297 systemd-logind[2068]: New session 5 of user core. Apr 17 23:45:30.677407 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 23:45:31.211358 sudo[2391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:45:31.211766 sudo[2391]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:45:31.228059 sudo[2391]: pam_unix(sudo:session): session closed for user root Apr 17 23:45:31.387134 sshd[2387]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:31.390939 systemd[1]: sshd@4-172.31.16.149:22-20.229.252.112:33726.service: Deactivated successfully. Apr 17 23:45:31.396914 systemd-logind[2068]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:45:31.398154 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:45:31.399265 systemd-logind[2068]: Removed session 5. Apr 17 23:45:31.552409 systemd[1]: Started sshd@5-172.31.16.149:22-20.229.252.112:33732.service - OpenSSH per-connection server daemon (20.229.252.112:33732). Apr 17 23:45:32.512960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:45:32.519278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:32.523553 sshd[2396]: Accepted publickey for core from 20.229.252.112 port 33732 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:32.525045 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:32.539898 systemd-logind[2068]: New session 6 of user core. Apr 17 23:45:32.543318 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:45:33.044906 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:45:33.045357 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:45:33.049795 sudo[2405]: pam_unix(sudo:session): session closed for user root Apr 17 23:45:33.055523 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:45:33.055923 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:45:33.072437 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:45:33.074684 auditctl[2408]: No rules Apr 17 23:45:33.076028 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:45:33.076389 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:45:33.085515 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:45:33.126472 augenrules[2427]: No rules Apr 17 23:45:33.128349 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:45:33.132899 sudo[2404]: pam_unix(sudo:session): session closed for user root Apr 17 23:45:33.292098 sshd[2396]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:33.296850 systemd[1]: sshd@5-172.31.16.149:22-20.229.252.112:33732.service: Deactivated successfully. Apr 17 23:45:33.301799 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:45:33.302784 systemd-logind[2068]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:45:33.303834 systemd-logind[2068]: Removed session 6. Apr 17 23:45:33.434260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:45:33.448660 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:33.460458 systemd[1]: Started sshd@6-172.31.16.149:22-20.229.252.112:33742.service - OpenSSH per-connection server daemon (20.229.252.112:33742). Apr 17 23:45:33.504822 kubelet[2444]: E0417 23:45:33.504783 2444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:33.511293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:33.511536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:45:34.449624 sshd[2449]: Accepted publickey for core from 20.229.252.112 port 33742 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:45:34.451211 sshd[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:34.457302 systemd-logind[2068]: New session 7 of user core. Apr 17 23:45:34.466445 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:45:34.969887 sudo[2456]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:45:34.970311 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:45:35.411406 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 17 23:45:35.411658 (dockerd)[2474]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:45:35.847930 dockerd[2474]: time="2026-04-17T23:45:35.847665951Z" level=info msg="Starting up" Apr 17 23:45:36.040296 dockerd[2474]: time="2026-04-17T23:45:36.040244609Z" level=info msg="Loading containers: start." Apr 17 23:45:36.169029 kernel: Initializing XFRM netlink socket Apr 17 23:45:36.203051 (udev-worker)[2496]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:45:36.259841 systemd-networkd[1658]: docker0: Link UP Apr 17 23:45:36.278807 dockerd[2474]: time="2026-04-17T23:45:36.278746707Z" level=info msg="Loading containers: done." Apr 17 23:45:36.307471 dockerd[2474]: time="2026-04-17T23:45:36.307411449Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:45:36.307893 dockerd[2474]: time="2026-04-17T23:45:36.307541592Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:45:36.307893 dockerd[2474]: time="2026-04-17T23:45:36.307680294Z" level=info msg="Daemon has completed initialization" Apr 17 23:45:36.341707 dockerd[2474]: time="2026-04-17T23:45:36.341050593Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:45:36.341417 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:45:37.159038 containerd[2098]: time="2026-04-17T23:45:37.158981884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:45:37.704574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145077046.mount: Deactivated successfully. 
Apr 17 23:45:39.323934 containerd[2098]: time="2026-04-17T23:45:39.323881870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:39.325656 containerd[2098]: time="2026-04-17T23:45:39.325446237Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 17 23:45:39.328506 containerd[2098]: time="2026-04-17T23:45:39.326948652Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:39.331102 containerd[2098]: time="2026-04-17T23:45:39.331053708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:39.332083 containerd[2098]: time="2026-04-17T23:45:39.332050079Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.173018466s"
Apr 17 23:45:39.332181 containerd[2098]: time="2026-04-17T23:45:39.332090115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 17 23:45:39.332752 containerd[2098]: time="2026-04-17T23:45:39.332724404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 17 23:45:41.123626 containerd[2098]: time="2026-04-17T23:45:41.123564754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:41.126377 containerd[2098]: time="2026-04-17T23:45:41.126321637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 17 23:45:41.134032 containerd[2098]: time="2026-04-17T23:45:41.132217224Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:41.140473 containerd[2098]: time="2026-04-17T23:45:41.139937054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:41.141517 containerd[2098]: time="2026-04-17T23:45:41.141470140Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.808709259s"
Apr 17 23:45:41.141633 containerd[2098]: time="2026-04-17T23:45:41.141522207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 17 23:45:41.142156 containerd[2098]: time="2026-04-17T23:45:41.142110389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 17 23:45:42.595816 containerd[2098]: time="2026-04-17T23:45:42.595752534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:42.597524 containerd[2098]: time="2026-04-17T23:45:42.597442902Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 17 23:45:42.599891 containerd[2098]: time="2026-04-17T23:45:42.599847731Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:42.603579 containerd[2098]: time="2026-04-17T23:45:42.603506865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:42.604987 containerd[2098]: time="2026-04-17T23:45:42.604829022Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.46265994s"
Apr 17 23:45:42.604987 containerd[2098]: time="2026-04-17T23:45:42.604877362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 17 23:45:42.605786 containerd[2098]: time="2026-04-17T23:45:42.605757857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 17 23:45:43.513081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:45:43.521311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:45:43.776604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:45:43.792162 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:45:43.854664 kubelet[2691]: E0417 23:45:43.854618 2691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:45:43.858134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:45:43.858389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:45:43.977137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971927607.mount: Deactivated successfully.
Apr 17 23:45:44.607629 containerd[2098]: time="2026-04-17T23:45:44.607438822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:44.609661 containerd[2098]: time="2026-04-17T23:45:44.609429661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 17 23:45:44.612247 containerd[2098]: time="2026-04-17T23:45:44.611884518Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:44.615648 containerd[2098]: time="2026-04-17T23:45:44.615600700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:44.616534 containerd[2098]: time="2026-04-17T23:45:44.616493113Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 2.010699549s"
Apr 17 23:45:44.616645 containerd[2098]: time="2026-04-17T23:45:44.616532190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 17 23:45:44.617406 containerd[2098]: time="2026-04-17T23:45:44.617351466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 17 23:45:45.144877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042465507.mount: Deactivated successfully.
Apr 17 23:45:46.427649 containerd[2098]: time="2026-04-17T23:45:46.427591426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.429584 containerd[2098]: time="2026-04-17T23:45:46.429511205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 17 23:45:46.432634 containerd[2098]: time="2026-04-17T23:45:46.432596227Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.435904 containerd[2098]: time="2026-04-17T23:45:46.435843304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.437947 containerd[2098]: time="2026-04-17T23:45:46.436961654Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.819574087s"
Apr 17 23:45:46.437947 containerd[2098]: time="2026-04-17T23:45:46.437038308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 17 23:45:46.437947 containerd[2098]: time="2026-04-17T23:45:46.437543025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 17 23:45:46.905516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009748348.mount: Deactivated successfully.
Apr 17 23:45:46.917783 containerd[2098]: time="2026-04-17T23:45:46.917729266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.919802 containerd[2098]: time="2026-04-17T23:45:46.919583779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 17 23:45:46.921962 containerd[2098]: time="2026-04-17T23:45:46.921892694Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.926282 containerd[2098]: time="2026-04-17T23:45:46.925226181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:46.926282 containerd[2098]: time="2026-04-17T23:45:46.926112952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.536515ms"
Apr 17 23:45:46.926282 containerd[2098]: time="2026-04-17T23:45:46.926150870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 17 23:45:46.927197 containerd[2098]: time="2026-04-17T23:45:46.927168465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 17 23:45:47.473141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258353515.mount: Deactivated successfully.
Apr 17 23:45:48.900690 containerd[2098]: time="2026-04-17T23:45:48.900633871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:48.902733 containerd[2098]: time="2026-04-17T23:45:48.902469051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 17 23:45:48.905569 containerd[2098]: time="2026-04-17T23:45:48.904938921Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:48.909517 containerd[2098]: time="2026-04-17T23:45:48.909464763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:48.910892 containerd[2098]: time="2026-04-17T23:45:48.910849740Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.983644907s"
Apr 17 23:45:48.910995 containerd[2098]: time="2026-04-17T23:45:48.910897343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 17 23:45:49.960374 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:45:51.593875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:45:51.600396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:45:51.636563 systemd[1]: Reloading requested from client PID 2858 ('systemctl') (unit session-7.scope)...
Apr 17 23:45:51.636582 systemd[1]: Reloading...
Apr 17 23:45:51.779030 zram_generator::config[2901]: No configuration found.
Apr 17 23:45:51.943705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:45:52.028312 systemd[1]: Reloading finished in 391 ms.
Apr 17 23:45:52.075792 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:45:52.075980 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:45:52.077253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:45:52.083834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:45:52.624245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:45:52.635610 (kubelet)[2971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:45:52.686082 kubelet[2971]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:45:52.687783 kubelet[2971]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:45:52.687783 kubelet[2971]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:45:52.687783 kubelet[2971]: I0417 23:45:52.686547 2971 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:45:53.260096 kubelet[2971]: I0417 23:45:53.260052 2971 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:45:53.260096 kubelet[2971]: I0417 23:45:53.260086 2971 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:45:53.260378 kubelet[2971]: I0417 23:45:53.260358 2971 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:45:53.303193 kubelet[2971]: E0417 23:45:53.303038 2971 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:45:53.307447 kubelet[2971]: I0417 23:45:53.307409 2971 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:45:53.319427 kubelet[2971]: E0417 23:45:53.319373 2971 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:45:53.319427 kubelet[2971]: I0417 23:45:53.319419 2971 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:45:53.328355 kubelet[2971]: I0417 23:45:53.328327 2971 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 17 23:45:53.332162 kubelet[2971]: I0417 23:45:53.332119 2971 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:45:53.336023 kubelet[2971]: I0417 23:45:53.332155 2971 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 17 23:45:53.336927 kubelet[2971]: I0417 23:45:53.336894 2971 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:45:53.336927 kubelet[2971]: I0417 23:45:53.336930 2971 container_manager_linux.go:303] "Creating device plugin manager"
Apr 17 23:45:53.339202 kubelet[2971]: I0417 23:45:53.339160 2971 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:45:53.346714 kubelet[2971]: I0417 23:45:53.346672 2971 kubelet.go:480] "Attempting to sync node with API server"
Apr 17 23:45:53.346714 kubelet[2971]: I0417 23:45:53.346711 2971 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:45:53.347262 kubelet[2971]: I0417 23:45:53.346747 2971 kubelet.go:386] "Adding apiserver pod source"
Apr 17 23:45:53.347262 kubelet[2971]: I0417 23:45:53.346770 2971 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:45:53.357611 kubelet[2971]: I0417 23:45:53.356822 2971 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:45:53.357611 kubelet[2971]: I0417 23:45:53.357528 2971 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:45:53.359571 kubelet[2971]: W0417 23:45:53.358726 2971 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:45:53.359865 kubelet[2971]: E0417 23:45:53.359830 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-149&limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:45:53.359987 kubelet[2971]: E0417 23:45:53.359962 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:45:53.365569 kubelet[2971]: I0417 23:45:53.365544 2971 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 23:45:53.365685 kubelet[2971]: I0417 23:45:53.365602 2971 server.go:1289] "Started kubelet"
Apr 17 23:45:53.365834 kubelet[2971]: I0417 23:45:53.365804 2971 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:45:53.367486 kubelet[2971]: I0417 23:45:53.366758 2971 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:45:53.369935 kubelet[2971]: I0417 23:45:53.369223 2971 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:45:53.369935 kubelet[2971]: I0417 23:45:53.369648 2971 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:45:53.371684 kubelet[2971]: E0417 23:45:53.369791 2971 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.149:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-149.18a749a58aa6b294 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-149,UID:ip-172-31-16-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-149,},FirstTimestamp:2026-04-17 23:45:53.365562004 +0000 UTC m=+0.725171679,LastTimestamp:2026-04-17 23:45:53.365562004 +0000 UTC m=+0.725171679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-149,}"
Apr 17 23:45:53.377013 kubelet[2971]: I0417 23:45:53.376322 2971 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:45:53.377665 kubelet[2971]: I0417 23:45:53.377639 2971 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:45:53.380827 kubelet[2971]: E0417 23:45:53.380800 2971 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-149\" not found"
Apr 17 23:45:53.380935 kubelet[2971]: I0417 23:45:53.380851 2971 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:45:53.381200 kubelet[2971]: I0417 23:45:53.381182 2971 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:45:53.381280 kubelet[2971]: I0417 23:45:53.381252 2971 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:45:53.383247 kubelet[2971]: E0417 23:45:53.382118 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:45:53.383247 kubelet[2971]: I0417 23:45:53.382425 2971 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:45:53.383247 kubelet[2971]: I0417 23:45:53.382565 2971 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:45:53.383247 kubelet[2971]: E0417 23:45:53.383098 2971 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:45:53.384831 kubelet[2971]: I0417 23:45:53.384811 2971 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:45:53.402303 kubelet[2971]: E0417 23:45:53.402242 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": dial tcp 172.31.16.149:6443: connect: connection refused" interval="200ms"
Apr 17 23:45:53.420379 kubelet[2971]: I0417 23:45:53.420134 2971 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:45:53.420379 kubelet[2971]: I0417 23:45:53.420155 2971 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:45:53.420379 kubelet[2971]: I0417 23:45:53.420176 2971 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:45:53.423190 kubelet[2971]: I0417 23:45:53.422811 2971 policy_none.go:49] "None policy: Start"
Apr 17 23:45:53.423190 kubelet[2971]: I0417 23:45:53.422836 2971 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:45:53.423190 kubelet[2971]: I0417 23:45:53.422849 2971 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:45:53.427891 kubelet[2971]: I0417 23:45:53.427852 2971 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:45:53.429678 kubelet[2971]: I0417 23:45:53.429655 2971 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:45:53.429800 kubelet[2971]: I0417 23:45:53.429791 2971 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:45:53.430278 kubelet[2971]: I0417 23:45:53.429889 2971 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:45:53.430278 kubelet[2971]: I0417 23:45:53.429901 2971 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:45:53.430278 kubelet[2971]: E0417 23:45:53.429947 2971 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:45:53.440240 kubelet[2971]: E0417 23:45:53.440204 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:45:53.440931 kubelet[2971]: E0417 23:45:53.440905 2971 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:45:53.444258 kubelet[2971]: I0417 23:45:53.444217 2971 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:45:53.444371 kubelet[2971]: I0417 23:45:53.444257 2971 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:45:53.445232 kubelet[2971]: I0417 23:45:53.445214 2971 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:45:53.446294 kubelet[2971]: E0417 23:45:53.446269 2971 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:45:53.446389 kubelet[2971]: E0417 23:45:53.446325 2971 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-149\" not found"
Apr 17 23:45:53.537931 kubelet[2971]: E0417 23:45:53.537714 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149"
Apr 17 23:45:53.543584 kubelet[2971]: E0417 23:45:53.543555 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149"
Apr 17 23:45:53.546859 kubelet[2971]: E0417 23:45:53.546826 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149"
Apr 17 23:45:53.547682 kubelet[2971]: I0417 23:45:53.547655 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149"
Apr 17 23:45:53.548070 kubelet[2971]: E0417 23:45:53.548015 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.149:6443/api/v1/nodes\": dial tcp 172.31.16.149:6443: connect: connection refused" node="ip-172-31-16-149"
Apr 17 23:45:53.582577 kubelet[2971]: I0417 23:45:53.582520 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:45:53.582577 kubelet[2971]: I0417 23:45:53.582569 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:45:53.582781 kubelet[2971]: I0417 23:45:53.582595 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:45:53.582781 kubelet[2971]: I0417 23:45:53.582617 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:45:53.582781 kubelet[2971]: I0417 23:45:53.582644 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c423c1b470255d46d3c8d7b0929d5082-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-149\" (UID: \"c423c1b470255d46d3c8d7b0929d5082\") " pod="kube-system/kube-scheduler-ip-172-31-16-149"
Apr 17 23:45:53.582781 kubelet[2971]: I0417 23:45:53.582667 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:45:53.582781 kubelet[2971]: I0417 23:45:53.582686 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:45:53.583102 kubelet[2971]: I0417 23:45:53.582709 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:45:53.583102 kubelet[2971]: I0417 23:45:53.582732 2971 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-ca-certs\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:45:53.603420 kubelet[2971]: E0417 23:45:53.603371 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": dial tcp 172.31.16.149:6443: connect: connection refused" interval="400ms"
Apr 17 23:45:53.750597 kubelet[2971]: I0417 23:45:53.750564 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149"
Apr 17 23:45:53.751303 kubelet[2971]: E0417 23:45:53.751083 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.149:6443/api/v1/nodes\": dial tcp 172.31.16.149:6443: connect: connection refused" node="ip-172-31-16-149"
Apr 17 23:45:53.839067 containerd[2098]: time="2026-04-17T23:45:53.838937098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-149,Uid:718491c74db8c0338a4a39a0ea7c4535,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:53.845670 containerd[2098]: time="2026-04-17T23:45:53.845621012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-149,Uid:0ac938fe675be16203ae7be6208a3001,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:53.848470 containerd[2098]: time="2026-04-17T23:45:53.848431014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-149,Uid:c423c1b470255d46d3c8d7b0929d5082,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:54.004350 kubelet[2971]: E0417 23:45:54.004291 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": dial tcp 172.31.16.149:6443: connect: connection refused" interval="800ms"
Apr 17 23:45:54.152911 kubelet[2971]: I0417 23:45:54.152810 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149"
Apr 17 23:45:54.153232 kubelet[2971]: E0417 23:45:54.153186 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.149:6443/api/v1/nodes\": dial tcp 172.31.16.149:6443: connect: connection refused" node="ip-172-31-16-149"
Apr 17 23:45:54.260681 kubelet[2971]: E0417 23:45:54.260629 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:45:54.284184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34233785.mount: Deactivated successfully.
Apr 17 23:45:54.291087 containerd[2098]: time="2026-04-17T23:45:54.291033075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:54.292189 containerd[2098]: time="2026-04-17T23:45:54.292136671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:45:54.293154 containerd[2098]: time="2026-04-17T23:45:54.293119620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:54.294119 containerd[2098]: time="2026-04-17T23:45:54.294087730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:54.295794 containerd[2098]: time="2026-04-17T23:45:54.295754689Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:54.296899 containerd[2098]: time="2026-04-17T23:45:54.296788315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:45:54.298451 containerd[2098]: time="2026-04-17T23:45:54.298343440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 17 23:45:54.300255 containerd[2098]: time="2026-04-17T23:45:54.300152135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:54.301655 
containerd[2098]: time="2026-04-17T23:45:54.301417355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 455.710978ms" Apr 17 23:45:54.303334 containerd[2098]: time="2026-04-17T23:45:54.303302587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.788281ms" Apr 17 23:45:54.305955 containerd[2098]: time="2026-04-17T23:45:54.305918957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.879562ms" Apr 17 23:45:54.329909 kubelet[2971]: E0417 23:45:54.329733 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:45:54.335056 kubelet[2971]: E0417 23:45:54.334980 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Apr 17 23:45:54.550307 containerd[2098]: time="2026-04-17T23:45:54.550191788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:54.550307 containerd[2098]: time="2026-04-17T23:45:54.550274591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:54.551825 containerd[2098]: time="2026-04-17T23:45:54.550806890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.551825 containerd[2098]: time="2026-04-17T23:45:54.550977361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.552458 containerd[2098]: time="2026-04-17T23:45:54.552240048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:54.552458 containerd[2098]: time="2026-04-17T23:45:54.552292836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:54.552458 containerd[2098]: time="2026-04-17T23:45:54.552309224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.553479 containerd[2098]: time="2026-04-17T23:45:54.552406157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.554969 containerd[2098]: time="2026-04-17T23:45:54.554695182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:54.554969 containerd[2098]: time="2026-04-17T23:45:54.554763505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:54.554969 containerd[2098]: time="2026-04-17T23:45:54.554788457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.554969 containerd[2098]: time="2026-04-17T23:45:54.554897089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:54.675029 containerd[2098]: time="2026-04-17T23:45:54.672241116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-149,Uid:0ac938fe675be16203ae7be6208a3001,Namespace:kube-system,Attempt:0,} returns sandbox id \"f049c7a9ef246dcc32dbca6a96c78a6d92d539ac02ce16ee3f1a4781cf8e0616\"" Apr 17 23:45:54.687031 containerd[2098]: time="2026-04-17T23:45:54.685937605Z" level=info msg="CreateContainer within sandbox \"f049c7a9ef246dcc32dbca6a96c78a6d92d539ac02ce16ee3f1a4781cf8e0616\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:45:54.700666 containerd[2098]: time="2026-04-17T23:45:54.700626527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-149,Uid:718491c74db8c0338a4a39a0ea7c4535,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d67ee4fda1ea3b5c824b52e6750ec44952fa7524fda98ca4ca60be27b318856\"" Apr 17 23:45:54.713319 containerd[2098]: time="2026-04-17T23:45:54.713288693Z" level=info msg="CreateContainer within sandbox \"4d67ee4fda1ea3b5c824b52e6750ec44952fa7524fda98ca4ca60be27b318856\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:45:54.713627 containerd[2098]: time="2026-04-17T23:45:54.713521960Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-149,Uid:c423c1b470255d46d3c8d7b0929d5082,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9d672d9a3efd9d8980e56f38a8adee8d12c45c70a890d3689e38657a07ac42\"" Apr 17 23:45:54.720640 containerd[2098]: time="2026-04-17T23:45:54.720610303Z" level=info msg="CreateContainer within sandbox \"9c9d672d9a3efd9d8980e56f38a8adee8d12c45c70a890d3689e38657a07ac42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:45:54.725382 containerd[2098]: time="2026-04-17T23:45:54.725334927Z" level=info msg="CreateContainer within sandbox \"f049c7a9ef246dcc32dbca6a96c78a6d92d539ac02ce16ee3f1a4781cf8e0616\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a\"" Apr 17 23:45:54.726088 containerd[2098]: time="2026-04-17T23:45:54.726058271Z" level=info msg="StartContainer for \"01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a\"" Apr 17 23:45:54.737418 containerd[2098]: time="2026-04-17T23:45:54.737385630Z" level=info msg="CreateContainer within sandbox \"4d67ee4fda1ea3b5c824b52e6750ec44952fa7524fda98ca4ca60be27b318856\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"003cc45a5653489c079416cd16cb3d568fe4b43a1faeae5ec900ac12f80d68a1\"" Apr 17 23:45:54.739158 containerd[2098]: time="2026-04-17T23:45:54.739127384Z" level=info msg="StartContainer for \"003cc45a5653489c079416cd16cb3d568fe4b43a1faeae5ec900ac12f80d68a1\"" Apr 17 23:45:54.742256 containerd[2098]: time="2026-04-17T23:45:54.742218776Z" level=info msg="CreateContainer within sandbox \"9c9d672d9a3efd9d8980e56f38a8adee8d12c45c70a890d3689e38657a07ac42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e\"" Apr 17 23:45:54.745502 containerd[2098]: time="2026-04-17T23:45:54.745470883Z" level=info msg="StartContainer for 
\"fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e\"" Apr 17 23:45:54.805750 kubelet[2971]: E0417 23:45:54.805619 2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": dial tcp 172.31.16.149:6443: connect: connection refused" interval="1.6s" Apr 17 23:45:54.847029 kubelet[2971]: E0417 23:45:54.843070 2971 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-149&limit=500&resourceVersion=0\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:45:54.860849 containerd[2098]: time="2026-04-17T23:45:54.860302569Z" level=info msg="StartContainer for \"01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a\" returns successfully" Apr 17 23:45:54.926106 containerd[2098]: time="2026-04-17T23:45:54.925482494Z" level=info msg="StartContainer for \"fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e\" returns successfully" Apr 17 23:45:54.951534 containerd[2098]: time="2026-04-17T23:45:54.950107241Z" level=info msg="StartContainer for \"003cc45a5653489c079416cd16cb3d568fe4b43a1faeae5ec900ac12f80d68a1\" returns successfully" Apr 17 23:45:54.966600 kubelet[2971]: I0417 23:45:54.966568 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149" Apr 17 23:45:54.967840 kubelet[2971]: E0417 23:45:54.967803 2971 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.149:6443/api/v1/nodes\": dial tcp 172.31.16.149:6443: connect: connection refused" node="ip-172-31-16-149" Apr 17 23:45:55.411430 kubelet[2971]: E0417 23:45:55.411382 2971 certificate_manager.go:596] "Failed while requesting a signed certificate from the control 
plane" err="cannot create certificate signing request: Post \"https://172.31.16.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:45:55.457934 kubelet[2971]: E0417 23:45:55.457901 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:55.458435 kubelet[2971]: E0417 23:45:55.458411 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:55.464568 kubelet[2971]: E0417 23:45:55.464537 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:56.468033 kubelet[2971]: E0417 23:45:56.467602 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:56.468928 kubelet[2971]: E0417 23:45:56.468668 2971 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:56.575331 kubelet[2971]: I0417 23:45:56.575296 2971 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149" Apr 17 23:45:57.513273 kubelet[2971]: E0417 23:45:57.513226 2971 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-149\" not found" node="ip-172-31-16-149" Apr 17 23:45:57.544354 kubelet[2971]: E0417 23:45:57.544258 2971 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-16-149.18a749a58aa6b294 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-149,UID:ip-172-31-16-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-149,},FirstTimestamp:2026-04-17 23:45:53.365562004 +0000 UTC m=+0.725171679,LastTimestamp:2026-04-17 23:45:53.365562004 +0000 UTC m=+0.725171679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-149,}" Apr 17 23:45:57.598667 kubelet[2971]: E0417 23:45:57.598565 2971 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-149.18a749a58b4b3d0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-149,UID:ip-172-31-16-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:ip-172-31-16-149,},FirstTimestamp:2026-04-17 23:45:53.376345356 +0000 UTC m=+0.735955028,LastTimestamp:2026-04-17 23:45:53.376345356 +0000 UTC m=+0.735955028,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-149,}" Apr 17 23:45:57.609388 kubelet[2971]: I0417 23:45:57.606464 2971 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-149" Apr 17 23:45:57.609388 kubelet[2971]: E0417 23:45:57.606524 2971 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-149\": node \"ip-172-31-16-149\" not found" Apr 17 23:45:57.694094 kubelet[2971]: I0417 23:45:57.694055 2971 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-149" Apr 17 23:45:57.707509 kubelet[2971]: E0417 23:45:57.707231 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-149" Apr 17 23:45:57.707509 kubelet[2971]: I0417 23:45:57.707266 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-149" Apr 17 23:45:57.711347 kubelet[2971]: E0417 23:45:57.710930 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-149" Apr 17 23:45:57.712637 kubelet[2971]: I0417 23:45:57.712138 2971 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-149" Apr 17 23:45:57.715873 kubelet[2971]: E0417 23:45:57.715829 2971 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-149\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-149" Apr 17 23:45:58.358201 kubelet[2971]: I0417 23:45:58.357934 2971 apiserver.go:52] "Watching apiserver" Apr 17 23:45:58.382084 kubelet[2971]: I0417 23:45:58.382031 2971 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:45:59.605521 systemd[1]: Reloading requested from client PID 3249 ('systemctl') (unit session-7.scope)... Apr 17 23:45:59.605539 systemd[1]: Reloading... Apr 17 23:45:59.730174 zram_generator::config[3292]: No configuration found. 
Apr 17 23:45:59.884457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:45:59.983189 systemd[1]: Reloading finished in 376 ms. Apr 17 23:46:00.023096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:46:00.046894 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:46:00.047675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:46:00.094370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:46:00.337232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:46:00.352616 (kubelet)[3359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:46:00.435209 kubelet[3359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:46:00.435209 kubelet[3359]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:46:00.435209 kubelet[3359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:46:00.435874 kubelet[3359]: I0417 23:46:00.435294 3359 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:46:00.448236 kubelet[3359]: I0417 23:46:00.448194 3359 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:46:00.448236 kubelet[3359]: I0417 23:46:00.448224 3359 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:46:00.448515 kubelet[3359]: I0417 23:46:00.448495 3359 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:46:00.449705 kubelet[3359]: I0417 23:46:00.449681 3359 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:46:00.465248 kubelet[3359]: I0417 23:46:00.465219 3359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:46:00.470264 kubelet[3359]: E0417 23:46:00.470214 3359 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:46:00.470264 kubelet[3359]: I0417 23:46:00.470250 3359 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:46:00.473906 kubelet[3359]: I0417 23:46:00.473864 3359 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:46:00.475608 kubelet[3359]: I0417 23:46:00.474721 3359 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:46:00.475608 kubelet[3359]: I0417 23:46:00.474756 3359 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:46:00.475608 kubelet[3359]: I0417 23:46:00.475161 3359 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:46:00.475608 kubelet[3359]: I0417 23:46:00.475176 3359 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:46:00.475608 kubelet[3359]: I0417 23:46:00.475240 3359 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:46:00.475968 kubelet[3359]: I0417 23:46:00.475516 3359 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:46:00.475968 kubelet[3359]: I0417 23:46:00.475548 3359 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:46:00.479560 kubelet[3359]: I0417 23:46:00.479528 3359 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:46:00.480495 kubelet[3359]: I0417 23:46:00.479821 3359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:46:00.482942 kubelet[3359]: I0417 23:46:00.482920 3359 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:46:00.483889 kubelet[3359]: I0417 23:46:00.483869 3359 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:46:00.503214 kubelet[3359]: I0417 23:46:00.502609 3359 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:46:00.503214 kubelet[3359]: I0417 23:46:00.502652 3359 server.go:1289] "Started kubelet" Apr 17 23:46:00.516491 kubelet[3359]: I0417 23:46:00.516458 3359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:46:00.524145 kubelet[3359]: I0417 23:46:00.524105 3359 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:46:00.526886 kubelet[3359]: I0417 23:46:00.525283 3359 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:46:00.535789 kubelet[3359]: I0417 23:46:00.535713 3359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:46:00.536020 kubelet[3359]: I0417 23:46:00.535981 3359 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:46:00.536311 kubelet[3359]: I0417 23:46:00.536287 3359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:46:00.540354 kubelet[3359]: I0417 23:46:00.538923 3359 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:46:00.540354 kubelet[3359]: I0417 23:46:00.539859 3359 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:46:00.540354 kubelet[3359]: I0417 23:46:00.539985 3359 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:46:00.546606 kubelet[3359]: I0417 23:46:00.541472 3359 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:46:00.546606 kubelet[3359]: I0417 23:46:00.541602 3359 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:46:00.550378 kubelet[3359]: I0417 23:46:00.550337 3359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:46:00.554945 kubelet[3359]: I0417 23:46:00.554765 3359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:46:00.554945 kubelet[3359]: I0417 23:46:00.554793 3359 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:46:00.554945 kubelet[3359]: I0417 23:46:00.554817 3359 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:46:00.554945 kubelet[3359]: I0417 23:46:00.554826 3359 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:46:00.554945 kubelet[3359]: E0417 23:46:00.554874 3359 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:46:00.559220 kubelet[3359]: E0417 23:46:00.559177 3359 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:46:00.561248 kubelet[3359]: I0417 23:46:00.561227 3359 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.635885 3359 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.635904 3359 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.635925 3359 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.636982 3359 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.637031 3359 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.637061 3359 policy_none.go:49] "None policy: Start"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.637075 3359 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.637090 3359 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:46:00.637968 kubelet[3359]: I0417 23:46:00.637217 3359 state_mem.go:75] "Updated machine memory state"
Apr 17 23:46:00.639871 kubelet[3359]: E0417 23:46:00.639642 3359 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:46:00.641312 kubelet[3359]: I0417 23:46:00.641089 3359 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:46:00.641312 kubelet[3359]: I0417 23:46:00.641121 3359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:46:00.641603 kubelet[3359]: I0417 23:46:00.641590 3359 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:46:00.644052 kubelet[3359]: E0417 23:46:00.643954 3359 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:46:00.660287 kubelet[3359]: I0417 23:46:00.660260 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.660776 kubelet[3359]: I0417 23:46:00.660154 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-149"
Apr 17 23:46:00.663129 kubelet[3359]: I0417 23:46:00.663105 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:00.742099 kubelet[3359]: I0417 23:46:00.742038 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:00.742099 kubelet[3359]: I0417 23:46:00.742093 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.742306 kubelet[3359]: I0417 23:46:00.742123 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.742306 kubelet[3359]: I0417 23:46:00.742159 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.742306 kubelet[3359]: I0417 23:46:00.742182 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c423c1b470255d46d3c8d7b0929d5082-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-149\" (UID: \"c423c1b470255d46d3c8d7b0929d5082\") " pod="kube-system/kube-scheduler-ip-172-31-16-149"
Apr 17 23:46:00.742306 kubelet[3359]: I0417 23:46:00.742216 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-ca-certs\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:00.742306 kubelet[3359]: I0417 23:46:00.742238 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/718491c74db8c0338a4a39a0ea7c4535-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-149\" (UID: \"718491c74db8c0338a4a39a0ea7c4535\") " pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:00.742480 kubelet[3359]: I0417 23:46:00.742258 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.742480 kubelet[3359]: I0417 23:46:00.742286 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ac938fe675be16203ae7be6208a3001-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-149\" (UID: \"0ac938fe675be16203ae7be6208a3001\") " pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:00.751169 kubelet[3359]: I0417 23:46:00.751134 3359 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-149"
Apr 17 23:46:00.760320 kubelet[3359]: I0417 23:46:00.760138 3359 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-149"
Apr 17 23:46:00.760320 kubelet[3359]: I0417 23:46:00.760226 3359 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-149"
Apr 17 23:46:01.482909 kubelet[3359]: I0417 23:46:01.482855 3359 apiserver.go:52] "Watching apiserver"
Apr 17 23:46:01.541047 kubelet[3359]: I0417 23:46:01.540961 3359 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 23:46:01.604523 kubelet[3359]: I0417 23:46:01.604421 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:01.606592 kubelet[3359]: I0417 23:46:01.606560 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:01.628186 kubelet[3359]: I0417 23:46:01.622204 3359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-149"
Apr 17 23:46:01.633058 kubelet[3359]: E0417 23:46:01.633019 3359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-149\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-149"
Apr 17 23:46:01.644335 kubelet[3359]: E0417 23:46:01.643323 3359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-149\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-149"
Apr 17 23:46:01.649237 kubelet[3359]: E0417 23:46:01.643827 3359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-149\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-149"
Apr 17 23:46:01.704408 kubelet[3359]: I0417 23:46:01.704335 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-149" podStartSLOduration=1.7043085150000001 podStartE2EDuration="1.704308515s" podCreationTimestamp="2026-04-17 23:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:01.700690866 +0000 UTC m=+1.337878546" watchObservedRunningTime="2026-04-17 23:46:01.704308515 +0000 UTC m=+1.341496194"
Apr 17 23:46:01.731813 kubelet[3359]: I0417 23:46:01.731639 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-149" podStartSLOduration=1.731614359 podStartE2EDuration="1.731614359s" podCreationTimestamp="2026-04-17 23:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:01.715963944 +0000 UTC m=+1.353151619" watchObservedRunningTime="2026-04-17 23:46:01.731614359 +0000 UTC m=+1.368802045"
Apr 17 23:46:03.604558 kubelet[3359]: I0417 23:46:03.604516 3359 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 17 23:46:03.605306 kubelet[3359]: I0417 23:46:03.605171 3359 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 17 23:46:03.605387 containerd[2098]: time="2026-04-17T23:46:03.604908250Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 17 23:46:04.049200 update_engine[2072]: I20260417 23:46:04.049109 2072 update_attempter.cc:509] Updating boot flags...
Apr 17 23:46:04.154027 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3420)
Apr 17 23:46:04.377153 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3420)
Apr 17 23:46:04.399638 kubelet[3359]: I0417 23:46:04.397987 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-149" podStartSLOduration=4.397964412 podStartE2EDuration="4.397964412s" podCreationTimestamp="2026-04-17 23:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:01.732070785 +0000 UTC m=+1.369258466" watchObservedRunningTime="2026-04-17 23:46:04.397964412 +0000 UTC m=+4.035152091"
Apr 17 23:46:04.598655 kubelet[3359]: I0417 23:46:04.597898 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73c26b48-de53-41ac-8f0b-f4c7107b22a7-xtables-lock\") pod \"kube-proxy-kr2mc\" (UID: \"73c26b48-de53-41ac-8f0b-f4c7107b22a7\") " pod="kube-system/kube-proxy-kr2mc"
Apr 17 23:46:04.598655 kubelet[3359]: I0417 23:46:04.597944 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73c26b48-de53-41ac-8f0b-f4c7107b22a7-lib-modules\") pod \"kube-proxy-kr2mc\" (UID: \"73c26b48-de53-41ac-8f0b-f4c7107b22a7\") " pod="kube-system/kube-proxy-kr2mc"
Apr 17 23:46:04.598655 kubelet[3359]: I0417 23:46:04.597971 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzl8c\" (UniqueName: \"kubernetes.io/projected/73c26b48-de53-41ac-8f0b-f4c7107b22a7-kube-api-access-bzl8c\") pod \"kube-proxy-kr2mc\" (UID: \"73c26b48-de53-41ac-8f0b-f4c7107b22a7\") " pod="kube-system/kube-proxy-kr2mc"
Apr 17 23:46:04.598655 kubelet[3359]: I0417 23:46:04.598018 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73c26b48-de53-41ac-8f0b-f4c7107b22a7-kube-proxy\") pod \"kube-proxy-kr2mc\" (UID: \"73c26b48-de53-41ac-8f0b-f4c7107b22a7\") " pod="kube-system/kube-proxy-kr2mc"
Apr 17 23:46:04.620095 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3420)
Apr 17 23:46:04.725466 kubelet[3359]: E0417 23:46:04.725360 3359 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 17 23:46:04.725466 kubelet[3359]: E0417 23:46:04.725415 3359 projected.go:194] Error preparing data for projected volume kube-api-access-bzl8c for pod kube-system/kube-proxy-kr2mc: configmap "kube-root-ca.crt" not found
Apr 17 23:46:04.727779 kubelet[3359]: E0417 23:46:04.727060 3359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73c26b48-de53-41ac-8f0b-f4c7107b22a7-kube-api-access-bzl8c podName:73c26b48-de53-41ac-8f0b-f4c7107b22a7 nodeName:}" failed. No retries permitted until 2026-04-17 23:46:05.225499915 +0000 UTC m=+4.862687587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bzl8c" (UniqueName: "kubernetes.io/projected/73c26b48-de53-41ac-8f0b-f4c7107b22a7-kube-api-access-bzl8c") pod "kube-proxy-kr2mc" (UID: "73c26b48-de53-41ac-8f0b-f4c7107b22a7") : configmap "kube-root-ca.crt" not found
Apr 17 23:46:05.100573 kubelet[3359]: I0417 23:46:05.100386 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb5d1246-81ab-4ab5-93d6-7355b319240e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-6cl24\" (UID: \"eb5d1246-81ab-4ab5-93d6-7355b319240e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6cl24"
Apr 17 23:46:05.100573 kubelet[3359]: I0417 23:46:05.100512 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r765v\" (UniqueName: \"kubernetes.io/projected/eb5d1246-81ab-4ab5-93d6-7355b319240e-kube-api-access-r765v\") pod \"tigera-operator-6bf85f8dd-6cl24\" (UID: \"eb5d1246-81ab-4ab5-93d6-7355b319240e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6cl24"
Apr 17 23:46:05.223432 containerd[2098]: time="2026-04-17T23:46:05.223392414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6cl24,Uid:eb5d1246-81ab-4ab5-93d6-7355b319240e,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:46:05.260740 containerd[2098]: time="2026-04-17T23:46:05.260165485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:46:05.260740 containerd[2098]: time="2026-04-17T23:46:05.260231972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:46:05.260740 containerd[2098]: time="2026-04-17T23:46:05.260251787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:05.260740 containerd[2098]: time="2026-04-17T23:46:05.260380264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:05.291255 systemd[1]: run-containerd-runc-k8s.io-5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66-runc.a5kcLX.mount: Deactivated successfully.
Apr 17 23:46:05.326850 containerd[2098]: time="2026-04-17T23:46:05.326809291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr2mc,Uid:73c26b48-de53-41ac-8f0b-f4c7107b22a7,Namespace:kube-system,Attempt:0,}"
Apr 17 23:46:05.345888 containerd[2098]: time="2026-04-17T23:46:05.345785671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6cl24,Uid:eb5d1246-81ab-4ab5-93d6-7355b319240e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66\""
Apr 17 23:46:05.348194 containerd[2098]: time="2026-04-17T23:46:05.347987388Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:46:05.378026 containerd[2098]: time="2026-04-17T23:46:05.377795196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:46:05.378026 containerd[2098]: time="2026-04-17T23:46:05.377869051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:46:05.379183 containerd[2098]: time="2026-04-17T23:46:05.377884788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:05.379312 containerd[2098]: time="2026-04-17T23:46:05.379154966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:05.428250 containerd[2098]: time="2026-04-17T23:46:05.428137065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr2mc,Uid:73c26b48-de53-41ac-8f0b-f4c7107b22a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a39d704ffcb03498e468e118d6e553ee2e69dc138d337162b8cd4eb51e7884c\""
Apr 17 23:46:05.437831 containerd[2098]: time="2026-04-17T23:46:05.437777952Z" level=info msg="CreateContainer within sandbox \"9a39d704ffcb03498e468e118d6e553ee2e69dc138d337162b8cd4eb51e7884c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:46:05.475643 containerd[2098]: time="2026-04-17T23:46:05.475594554Z" level=info msg="CreateContainer within sandbox \"9a39d704ffcb03498e468e118d6e553ee2e69dc138d337162b8cd4eb51e7884c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1019bd7f2c4366b12db05737002617b1fde1f93dd681fdf6463ba379c1b02227\""
Apr 17 23:46:05.477042 containerd[2098]: time="2026-04-17T23:46:05.476778786Z" level=info msg="StartContainer for \"1019bd7f2c4366b12db05737002617b1fde1f93dd681fdf6463ba379c1b02227\""
Apr 17 23:46:05.542322 containerd[2098]: time="2026-04-17T23:46:05.542254581Z" level=info msg="StartContainer for \"1019bd7f2c4366b12db05737002617b1fde1f93dd681fdf6463ba379c1b02227\" returns successfully"
Apr 17 23:46:05.652074 kubelet[3359]: I0417 23:46:05.651477 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kr2mc" podStartSLOduration=1.6514553589999998 podStartE2EDuration="1.651455359s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:05.638061508 +0000 UTC m=+5.275249184" watchObservedRunningTime="2026-04-17 23:46:05.651455359 +0000 UTC m=+5.288643039"
Apr 17 23:46:06.471573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857556463.mount: Deactivated successfully.
Apr 17 23:46:07.786629 containerd[2098]: time="2026-04-17T23:46:07.786571053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:07.788992 containerd[2098]: time="2026-04-17T23:46:07.788814017Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 17 23:46:07.791617 containerd[2098]: time="2026-04-17T23:46:07.791311028Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:07.795287 containerd[2098]: time="2026-04-17T23:46:07.795239871Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:07.796715 containerd[2098]: time="2026-04-17T23:46:07.796217778Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.448155839s"
Apr 17 23:46:07.796715 containerd[2098]: time="2026-04-17T23:46:07.796263658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 17 23:46:07.803926 containerd[2098]: time="2026-04-17T23:46:07.803866012Z" level=info msg="CreateContainer within sandbox \"5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:46:07.832769 containerd[2098]: time="2026-04-17T23:46:07.832724404Z" level=info msg="CreateContainer within sandbox \"5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b\""
Apr 17 23:46:07.833557 containerd[2098]: time="2026-04-17T23:46:07.833511214Z" level=info msg="StartContainer for \"213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b\""
Apr 17 23:46:07.899066 containerd[2098]: time="2026-04-17T23:46:07.899020335Z" level=info msg="StartContainer for \"213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b\" returns successfully"
Apr 17 23:46:09.381024 kubelet[3359]: I0417 23:46:09.379411 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-6cl24" podStartSLOduration=2.929329017 podStartE2EDuration="5.379388533s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="2026-04-17 23:46:05.347633636 +0000 UTC m=+4.984821299" lastFinishedPulling="2026-04-17 23:46:07.797693156 +0000 UTC m=+7.434880815" observedRunningTime="2026-04-17 23:46:08.653664781 +0000 UTC m=+8.290852460" watchObservedRunningTime="2026-04-17 23:46:09.379388533 +0000 UTC m=+9.016576212"
Apr 17 23:46:12.892305 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:46:12.893073 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:46:12.894063 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:46:15.312837 sudo[2456]: pam_unix(sudo:session): session closed for user root
Apr 17 23:46:15.474840 sshd[2449]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:15.481884 systemd[1]: sshd@6-172.31.16.149:22-20.229.252.112:33742.service: Deactivated successfully.
Apr 17 23:46:15.501262 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:46:15.501347 systemd-logind[2068]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:46:15.516151 systemd-logind[2068]: Removed session 7.
Apr 17 23:46:18.714505 kubelet[3359]: I0417 23:46:18.714326 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efb707f7-1d79-4d0e-84ca-ae6fc4485eef-tigera-ca-bundle\") pod \"calico-typha-74f94d6cf6-xxmjs\" (UID: \"efb707f7-1d79-4d0e-84ca-ae6fc4485eef\") " pod="calico-system/calico-typha-74f94d6cf6-xxmjs"
Apr 17 23:46:18.714505 kubelet[3359]: I0417 23:46:18.714391 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/efb707f7-1d79-4d0e-84ca-ae6fc4485eef-typha-certs\") pod \"calico-typha-74f94d6cf6-xxmjs\" (UID: \"efb707f7-1d79-4d0e-84ca-ae6fc4485eef\") " pod="calico-system/calico-typha-74f94d6cf6-xxmjs"
Apr 17 23:46:18.714505 kubelet[3359]: I0417 23:46:18.714427 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnvwt\" (UniqueName: \"kubernetes.io/projected/efb707f7-1d79-4d0e-84ca-ae6fc4485eef-kube-api-access-nnvwt\") pod \"calico-typha-74f94d6cf6-xxmjs\" (UID: \"efb707f7-1d79-4d0e-84ca-ae6fc4485eef\") " pod="calico-system/calico-typha-74f94d6cf6-xxmjs"
Apr 17 23:46:18.916346 kubelet[3359]: I0417 23:46:18.916305 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-cni-log-dir\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916346 kubelet[3359]: I0417 23:46:18.916348 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-var-lib-calico\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916544 kubelet[3359]: I0417 23:46:18.916389 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-var-run-calico\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916544 kubelet[3359]: I0417 23:46:18.916409 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-xtables-lock\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916544 kubelet[3359]: I0417 23:46:18.916430 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-cni-bin-dir\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916544 kubelet[3359]: I0417 23:46:18.916449 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-cni-net-dir\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916544 kubelet[3359]: I0417 23:46:18.916471 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-nodeproc\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916777 kubelet[3359]: I0417 23:46:18.916492 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bfbcfe-3113-479f-8340-3409f33153da-tigera-ca-bundle\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916777 kubelet[3359]: I0417 23:46:18.916517 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-flexvol-driver-host\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916777 kubelet[3359]: I0417 23:46:18.916544 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00bfbcfe-3113-479f-8340-3409f33153da-node-certs\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916777 kubelet[3359]: I0417 23:46:18.916571 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-bpffs\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916777 kubelet[3359]: I0417 23:46:18.916641 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-sys-fs\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916984 kubelet[3359]: I0417 23:46:18.916665 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htpp\" (UniqueName: \"kubernetes.io/projected/00bfbcfe-3113-479f-8340-3409f33153da-kube-api-access-6htpp\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916984 kubelet[3359]: I0417 23:46:18.916693 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-lib-modules\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.916984 kubelet[3359]: I0417 23:46:18.916723 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00bfbcfe-3113-479f-8340-3409f33153da-policysync\") pod \"calico-node-jscr6\" (UID: \"00bfbcfe-3113-479f-8340-3409f33153da\") " pod="calico-system/calico-node-jscr6"
Apr 17 23:46:18.919963 containerd[2098]: time="2026-04-17T23:46:18.918987920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f94d6cf6-xxmjs,Uid:efb707f7-1d79-4d0e-84ca-ae6fc4485eef,Namespace:calico-system,Attempt:0,}"
Apr 17 23:46:18.990336 kubelet[3359]: E0417 23:46:18.985990 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349"
Apr 17 23:46:19.019176 kubelet[3359]: I0417 23:46:19.017219 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15f67ed1-2981-42fd-8b37-94a71c9f9349-registration-dir\") pod \"csi-node-driver-ch9lf\" (UID: \"15f67ed1-2981-42fd-8b37-94a71c9f9349\") " pod="calico-system/csi-node-driver-ch9lf"
Apr 17 23:46:19.019176 kubelet[3359]: I0417 23:46:19.017267 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15f67ed1-2981-42fd-8b37-94a71c9f9349-socket-dir\") pod \"csi-node-driver-ch9lf\" (UID: \"15f67ed1-2981-42fd-8b37-94a71c9f9349\") " pod="calico-system/csi-node-driver-ch9lf"
Apr 17 23:46:19.019176 kubelet[3359]: I0417 23:46:19.017397 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15f67ed1-2981-42fd-8b37-94a71c9f9349-varrun\") pod \"csi-node-driver-ch9lf\" (UID: \"15f67ed1-2981-42fd-8b37-94a71c9f9349\") " pod="calico-system/csi-node-driver-ch9lf"
Apr 17 23:46:19.019176 kubelet[3359]: I0417 23:46:19.017450 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8ps\" (UniqueName: \"kubernetes.io/projected/15f67ed1-2981-42fd-8b37-94a71c9f9349-kube-api-access-9w8ps\") pod \"csi-node-driver-ch9lf\" (UID: \"15f67ed1-2981-42fd-8b37-94a71c9f9349\") " pod="calico-system/csi-node-driver-ch9lf"
Apr 17 23:46:19.019176 kubelet[3359]: I0417 23:46:19.017511 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15f67ed1-2981-42fd-8b37-94a71c9f9349-kubelet-dir\") pod \"csi-node-driver-ch9lf\" (UID: \"15f67ed1-2981-42fd-8b37-94a71c9f9349\") " pod="calico-system/csi-node-driver-ch9lf"
Apr 17 23:46:19.036725 kubelet[3359]: E0417 23:46:19.036692 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.037543 kubelet[3359]: W0417 23:46:19.037169 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.037543 kubelet[3359]: E0417 23:46:19.037210 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.040326 kubelet[3359]: E0417 23:46:19.040049 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.040326 kubelet[3359]: W0417 23:46:19.040075 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.040326 kubelet[3359]: E0417 23:46:19.040100 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.041182 kubelet[3359]: E0417 23:46:19.040610 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.041182 kubelet[3359]: W0417 23:46:19.040624 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.041182 kubelet[3359]: E0417 23:46:19.040642 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.041863 kubelet[3359]: E0417 23:46:19.041846 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.043629 kubelet[3359]: W0417 23:46:19.043419 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.043629 kubelet[3359]: E0417 23:46:19.043460 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.045379 kubelet[3359]: E0417 23:46:19.045105 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.045379 kubelet[3359]: W0417 23:46:19.045131 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.045379 kubelet[3359]: E0417 23:46:19.045151 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.045624 kubelet[3359]: E0417 23:46:19.045613 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.046188 kubelet[3359]: W0417 23:46:19.046030 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.046188 kubelet[3359]: E0417 23:46:19.046058 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.049056 kubelet[3359]: E0417 23:46:19.047093 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.049056 kubelet[3359]: W0417 23:46:19.047109 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.049056 kubelet[3359]: E0417 23:46:19.047124 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.049487 kubelet[3359]: E0417 23:46:19.049470 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.049927 kubelet[3359]: W0417 23:46:19.049907 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.050134 kubelet[3359]: E0417 23:46:19.050116 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.056021 kubelet[3359]: E0417 23:46:19.054101 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.056021 kubelet[3359]: W0417 23:46:19.054126 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.056021 kubelet[3359]: E0417 23:46:19.054149 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.056899 kubelet[3359]: E0417 23:46:19.056877 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.058670 kubelet[3359]: W0417 23:46:19.058471 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.058670 kubelet[3359]: E0417 23:46:19.058507 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.061101 kubelet[3359]: E0417 23:46:19.059404 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.061101 kubelet[3359]: W0417 23:46:19.059421 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.061101 kubelet[3359]: E0417 23:46:19.059442 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.062408 kubelet[3359]: E0417 23:46:19.062120 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.062408 kubelet[3359]: W0417 23:46:19.062139 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.062408 kubelet[3359]: E0417 23:46:19.062161 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.063360 kubelet[3359]: E0417 23:46:19.063072 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.063360 kubelet[3359]: W0417 23:46:19.063091 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.063360 kubelet[3359]: E0417 23:46:19.063112 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.064650 kubelet[3359]: E0417 23:46:19.064325 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.064650 kubelet[3359]: W0417 23:46:19.064350 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.064650 kubelet[3359]: E0417 23:46:19.064378 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.065716 kubelet[3359]: E0417 23:46:19.065584 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.065716 kubelet[3359]: W0417 23:46:19.065602 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.065716 kubelet[3359]: E0417 23:46:19.065621 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.067040 kubelet[3359]: E0417 23:46:19.066769 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.067040 kubelet[3359]: W0417 23:46:19.066785 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.067040 kubelet[3359]: E0417 23:46:19.066803 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.068085 kubelet[3359]: E0417 23:46:19.067760 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.068085 kubelet[3359]: W0417 23:46:19.067937 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.068085 kubelet[3359]: E0417 23:46:19.067959 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.069725 kubelet[3359]: E0417 23:46:19.069347 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.069725 kubelet[3359]: W0417 23:46:19.069362 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.069725 kubelet[3359]: E0417 23:46:19.069379 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.070840 kubelet[3359]: E0417 23:46:19.070221 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.070840 kubelet[3359]: W0417 23:46:19.070233 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.070840 kubelet[3359]: E0417 23:46:19.070248 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.073141 kubelet[3359]: E0417 23:46:19.071961 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.073141 kubelet[3359]: W0417 23:46:19.071977 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.073141 kubelet[3359]: E0417 23:46:19.071993 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.073374 kubelet[3359]: E0417 23:46:19.073362 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.073473 kubelet[3359]: W0417 23:46:19.073433 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.073473 kubelet[3359]: E0417 23:46:19.073457 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.074192 kubelet[3359]: E0417 23:46:19.074119 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.074192 kubelet[3359]: W0417 23:46:19.074134 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.074192 kubelet[3359]: E0417 23:46:19.074148 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.076277 kubelet[3359]: E0417 23:46:19.075819 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.076277 kubelet[3359]: W0417 23:46:19.075835 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.076277 kubelet[3359]: E0417 23:46:19.075851 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.076277 kubelet[3359]: E0417 23:46:19.076175 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.076277 kubelet[3359]: W0417 23:46:19.076187 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.076277 kubelet[3359]: E0417 23:46:19.076200 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.077448 kubelet[3359]: E0417 23:46:19.077129 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.077701 kubelet[3359]: W0417 23:46:19.077550 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.077701 kubelet[3359]: E0417 23:46:19.077572 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.078383 kubelet[3359]: E0417 23:46:19.077968 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.078383 kubelet[3359]: W0417 23:46:19.077981 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.078383 kubelet[3359]: E0417 23:46:19.077995 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.078533 containerd[2098]: time="2026-04-17T23:46:19.077166873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:19.078533 containerd[2098]: time="2026-04-17T23:46:19.077253640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:19.078533 containerd[2098]: time="2026-04-17T23:46:19.077274841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:19.078533 containerd[2098]: time="2026-04-17T23:46:19.077394770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:19.079881 kubelet[3359]: E0417 23:46:19.079267 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.079881 kubelet[3359]: W0417 23:46:19.079288 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.079881 kubelet[3359]: E0417 23:46:19.079304 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.080869 kubelet[3359]: E0417 23:46:19.080602 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.080869 kubelet[3359]: W0417 23:46:19.080617 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.080869 kubelet[3359]: E0417 23:46:19.080632 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.081499 kubelet[3359]: E0417 23:46:19.081374 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.081499 kubelet[3359]: W0417 23:46:19.081389 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.081499 kubelet[3359]: E0417 23:46:19.081404 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.083590 kubelet[3359]: E0417 23:46:19.082167 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.083590 kubelet[3359]: W0417 23:46:19.082181 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.083590 kubelet[3359]: E0417 23:46:19.082197 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.083590 kubelet[3359]: E0417 23:46:19.083410 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.083590 kubelet[3359]: W0417 23:46:19.083422 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.083590 kubelet[3359]: E0417 23:46:19.083437 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.092509 kubelet[3359]: E0417 23:46:19.092063 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.092509 kubelet[3359]: W0417 23:46:19.092094 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.092509 kubelet[3359]: E0417 23:46:19.092121 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.093127 kubelet[3359]: E0417 23:46:19.093110 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.093263 kubelet[3359]: W0417 23:46:19.093207 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.093263 kubelet[3359]: E0417 23:46:19.093229 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.120042 kubelet[3359]: E0417 23:46:19.119339 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.120042 kubelet[3359]: W0417 23:46:19.119365 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.120042 kubelet[3359]: E0417 23:46:19.119397 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.120042 kubelet[3359]: E0417 23:46:19.119713 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.120042 kubelet[3359]: W0417 23:46:19.119725 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.120042 kubelet[3359]: E0417 23:46:19.119738 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.121911 kubelet[3359]: E0417 23:46:19.121514 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.121911 kubelet[3359]: W0417 23:46:19.121531 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.121911 kubelet[3359]: E0417 23:46:19.121548 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.122878 kubelet[3359]: E0417 23:46:19.122495 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.122878 kubelet[3359]: W0417 23:46:19.122511 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.122878 kubelet[3359]: E0417 23:46:19.122526 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.124900 kubelet[3359]: E0417 23:46:19.123646 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.124900 kubelet[3359]: W0417 23:46:19.124398 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.124900 kubelet[3359]: E0417 23:46:19.124421 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.127650 kubelet[3359]: E0417 23:46:19.127205 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.127650 kubelet[3359]: W0417 23:46:19.127223 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.127650 kubelet[3359]: E0417 23:46:19.127242 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.127650 kubelet[3359]: E0417 23:46:19.127601 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.127650 kubelet[3359]: W0417 23:46:19.127615 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.127650 kubelet[3359]: E0417 23:46:19.127630 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.129800 kubelet[3359]: E0417 23:46:19.128415 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.129800 kubelet[3359]: W0417 23:46:19.128432 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.129800 kubelet[3359]: E0417 23:46:19.128447 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.132786 kubelet[3359]: E0417 23:46:19.132766 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.133434 kubelet[3359]: W0417 23:46:19.132886 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.133434 kubelet[3359]: E0417 23:46:19.132913 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.137480 kubelet[3359]: E0417 23:46:19.137400 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.137480 kubelet[3359]: W0417 23:46:19.137423 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.137480 kubelet[3359]: E0417 23:46:19.137450 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.138435 kubelet[3359]: E0417 23:46:19.138404 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.138435 kubelet[3359]: W0417 23:46:19.138426 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.140144 kubelet[3359]: E0417 23:46:19.138449 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.142082 kubelet[3359]: E0417 23:46:19.141703 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.142285 kubelet[3359]: W0417 23:46:19.141728 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.142476 kubelet[3359]: E0417 23:46:19.142300 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.145538 kubelet[3359]: E0417 23:46:19.144341 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.145538 kubelet[3359]: W0417 23:46:19.144460 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.145538 kubelet[3359]: E0417 23:46:19.144487 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:46:19.145538 kubelet[3359]: E0417 23:46:19.144844 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.145538 kubelet[3359]: W0417 23:46:19.144856 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.145538 kubelet[3359]: E0417 23:46:19.144869 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:46:19.169774 kubelet[3359]: E0417 23:46:19.169742 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:19.169774 kubelet[3359]: W0417 23:46:19.169769 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:19.169951 kubelet[3359]: E0417 23:46:19.169792 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 23:46:19.199473 containerd[2098]: time="2026-04-17T23:46:19.199429125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jscr6,Uid:00bfbcfe-3113-479f-8340-3409f33153da,Namespace:calico-system,Attempt:0,}"
Apr 17 23:46:19.231182 containerd[2098]: time="2026-04-17T23:46:19.231023788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f94d6cf6-xxmjs,Uid:efb707f7-1d79-4d0e-84ca-ae6fc4485eef,Namespace:calico-system,Attempt:0,} returns sandbox id \"b80ca5519d3875ff12f5b1ce4a8db5c0ca494f4e27221c03f97bad2ad19ed98a\""
Apr 17 23:46:19.234234 containerd[2098]: time="2026-04-17T23:46:19.234155916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 17 23:46:19.245503 containerd[2098]: time="2026-04-17T23:46:19.245325239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:46:19.245503 containerd[2098]: time="2026-04-17T23:46:19.245395565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:46:19.246969 containerd[2098]: time="2026-04-17T23:46:19.245416852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:19.246969 containerd[2098]: time="2026-04-17T23:46:19.245538553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:19.305521 containerd[2098]: time="2026-04-17T23:46:19.305476736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jscr6,Uid:00bfbcfe-3113-479f-8340-3409f33153da,Namespace:calico-system,Attempt:0,} returns sandbox id \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\""
Apr 17 23:46:20.558365 kubelet[3359]: E0417 23:46:20.556685 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349"
Apr 17 23:46:20.621199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226215559.mount: Deactivated successfully.
Apr 17 23:46:21.638307 containerd[2098]: time="2026-04-17T23:46:21.638250401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:21.639662 containerd[2098]: time="2026-04-17T23:46:21.639608109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 17 23:46:21.642040 containerd[2098]: time="2026-04-17T23:46:21.640593019Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:21.643926 containerd[2098]: time="2026-04-17T23:46:21.643881757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:21.645051 containerd[2098]: time="2026-04-17T23:46:21.644989072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.410790348s"
Apr 17 23:46:21.645197 containerd[2098]: time="2026-04-17T23:46:21.645176648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 17 23:46:21.646503 containerd[2098]: time="2026-04-17T23:46:21.646476752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 17 23:46:21.673603 containerd[2098]: time="2026-04-17T23:46:21.673563262Z" level=info msg="CreateContainer within sandbox \"b80ca5519d3875ff12f5b1ce4a8db5c0ca494f4e27221c03f97bad2ad19ed98a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 17 23:46:21.711975 containerd[2098]: time="2026-04-17T23:46:21.711928648Z" level=info msg="CreateContainer within sandbox \"b80ca5519d3875ff12f5b1ce4a8db5c0ca494f4e27221c03f97bad2ad19ed98a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"104e787d769ca013c72a8f198b99cfe9d913b1d34540d4b4464dde77bcadac19\""
Apr 17 23:46:21.713520 containerd[2098]: time="2026-04-17T23:46:21.713108515Z" level=info msg="StartContainer for \"104e787d769ca013c72a8f198b99cfe9d913b1d34540d4b4464dde77bcadac19\""
Apr 17 23:46:21.802349 containerd[2098]: time="2026-04-17T23:46:21.802304380Z" level=info msg="StartContainer for \"104e787d769ca013c72a8f198b99cfe9d913b1d34540d4b4464dde77bcadac19\" returns successfully"
Apr 17 23:46:22.565889 kubelet[3359]: E0417 23:46:22.565261 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349"
Apr 17 23:46:22.728669 kubelet[3359]: I0417 23:46:22.726766 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74f94d6cf6-xxmjs" podStartSLOduration=2.310823368 podStartE2EDuration="4.723426933s" podCreationTimestamp="2026-04-17 23:46:18 +0000 UTC" firstStartedPulling="2026-04-17 23:46:19.233408323 +0000 UTC m=+18.870595989" lastFinishedPulling="2026-04-17 23:46:21.646011884 +0000 UTC m=+21.283199554" observedRunningTime="2026-04-17 23:46:22.710299633 +0000 UTC m=+22.347487312" watchObservedRunningTime="2026-04-17 23:46:22.723426933 +0000 UTC m=+22.360614628"
Apr 17 23:46:22.729545 kubelet[3359]: E0417 23:46:22.729309 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:22.729545 kubelet[3359]: W0417 23:46:22.729332 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:22.729545 kubelet[3359]: E0417 23:46:22.729356 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 23:46:22.774529 kubelet[3359]: E0417 23:46:22.774512 3359 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:46:22.774529 kubelet[3359]: W0417 23:46:22.774527 3359 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:46:22.774617 kubelet[3359]: E0417 23:46:22.774541 3359 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:46:23.253020 containerd[2098]: time="2026-04-17T23:46:23.252921791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:23.255945 containerd[2098]: time="2026-04-17T23:46:23.254665728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:46:23.258298 containerd[2098]: time="2026-04-17T23:46:23.257030002Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:23.283589 containerd[2098]: time="2026-04-17T23:46:23.280716433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:23.283589 containerd[2098]: time="2026-04-17T23:46:23.281850590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.635334514s" Apr 17 23:46:23.283589 containerd[2098]: time="2026-04-17T23:46:23.281892031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:46:23.289130 containerd[2098]: time="2026-04-17T23:46:23.289094241Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:46:23.310183 containerd[2098]: time="2026-04-17T23:46:23.310140460Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165\"" Apr 17 23:46:23.312206 containerd[2098]: time="2026-04-17T23:46:23.311181472Z" level=info msg="StartContainer for \"012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165\"" Apr 17 23:46:23.398312 containerd[2098]: time="2026-04-17T23:46:23.398217763Z" level=info msg="StartContainer for \"012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165\" returns successfully" Apr 17 23:46:23.494825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165-rootfs.mount: Deactivated successfully. Apr 17 23:46:23.586310 containerd[2098]: time="2026-04-17T23:46:23.564333995Z" level=info msg="shim disconnected" id=012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165 namespace=k8s.io Apr 17 23:46:23.586310 containerd[2098]: time="2026-04-17T23:46:23.586307317Z" level=warning msg="cleaning up after shim disconnected" id=012b8b69f39a9f8f905ee9ab0c9431fc373c2117f2878ed883cd041cb3af6165 namespace=k8s.io Apr 17 23:46:23.586629 containerd[2098]: time="2026-04-17T23:46:23.586327466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:46:23.701107 containerd[2098]: time="2026-04-17T23:46:23.700951151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:46:24.558522 kubelet[3359]: E0417 23:46:24.558475 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:26.562109 kubelet[3359]: E0417 23:46:26.556591 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:28.557025 kubelet[3359]: E0417 23:46:28.556956 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:30.556856 kubelet[3359]: E0417 23:46:30.556815 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:31.085994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990647082.mount: Deactivated successfully. 
Apr 17 23:46:31.149533 containerd[2098]: time="2026-04-17T23:46:31.148103801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:46:31.149533 containerd[2098]: time="2026-04-17T23:46:31.137461395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:31.151044 containerd[2098]: time="2026-04-17T23:46:31.150985067Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:31.153134 containerd[2098]: time="2026-04-17T23:46:31.152244181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:31.153134 containerd[2098]: time="2026-04-17T23:46:31.152970644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.451973957s" Apr 17 23:46:31.153134 containerd[2098]: time="2026-04-17T23:46:31.153024498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:46:31.160628 containerd[2098]: time="2026-04-17T23:46:31.160583228Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:46:31.197180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058755799.mount: 
Deactivated successfully. Apr 17 23:46:31.201902 containerd[2098]: time="2026-04-17T23:46:31.201858628Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb\"" Apr 17 23:46:31.210140 containerd[2098]: time="2026-04-17T23:46:31.209612449Z" level=info msg="StartContainer for \"27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb\"" Apr 17 23:46:31.343342 containerd[2098]: time="2026-04-17T23:46:31.343191974Z" level=info msg="StartContainer for \"27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb\" returns successfully" Apr 17 23:46:31.678170 containerd[2098]: time="2026-04-17T23:46:31.677922256Z" level=info msg="shim disconnected" id=27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb namespace=k8s.io Apr 17 23:46:31.678170 containerd[2098]: time="2026-04-17T23:46:31.677992288Z" level=warning msg="cleaning up after shim disconnected" id=27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb namespace=k8s.io Apr 17 23:46:31.678170 containerd[2098]: time="2026-04-17T23:46:31.678022237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:46:31.694179 containerd[2098]: time="2026-04-17T23:46:31.694109055Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:46:31.723700 containerd[2098]: time="2026-04-17T23:46:31.723329026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:46:32.086028 systemd[1]: run-containerd-runc-k8s.io-27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb-runc.ZHeofJ.mount: Deactivated successfully. 
Apr 17 23:46:32.086748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27b27d948724bb8e27c9c2a5e373c64e27d971064738299a8696ca008897ebbb-rootfs.mount: Deactivated successfully. Apr 17 23:46:32.556498 kubelet[3359]: E0417 23:46:32.555543 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:32.925083 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:32.925384 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:32.925430 systemd-resolved[1997]: Flushed all caches. Apr 17 23:46:34.556456 kubelet[3359]: E0417 23:46:34.556401 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:35.715124 containerd[2098]: time="2026-04-17T23:46:35.715076927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:35.716484 containerd[2098]: time="2026-04-17T23:46:35.716337953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:46:35.718664 containerd[2098]: time="2026-04-17T23:46:35.717830188Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:35.735031 containerd[2098]: time="2026-04-17T23:46:35.733332015Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:35.735872 containerd[2098]: time="2026-04-17T23:46:35.735832452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.012459566s" Apr 17 23:46:35.736038 containerd[2098]: time="2026-04-17T23:46:35.735994584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:46:35.744309 containerd[2098]: time="2026-04-17T23:46:35.744263765Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:46:35.769273 containerd[2098]: time="2026-04-17T23:46:35.769218582Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21\"" Apr 17 23:46:35.770051 containerd[2098]: time="2026-04-17T23:46:35.769996625Z" level=info msg="StartContainer for \"2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21\"" Apr 17 23:46:35.850389 containerd[2098]: time="2026-04-17T23:46:35.850331471Z" level=info msg="StartContainer for \"2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21\" returns successfully" Apr 17 23:46:36.556850 kubelet[3359]: E0417 23:46:36.556249 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:36.961898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21-rootfs.mount: Deactivated successfully. Apr 17 23:46:36.969364 containerd[2098]: time="2026-04-17T23:46:36.969300563Z" level=info msg="shim disconnected" id=2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21 namespace=k8s.io Apr 17 23:46:36.970103 containerd[2098]: time="2026-04-17T23:46:36.969365674Z" level=warning msg="cleaning up after shim disconnected" id=2a77d23009b57427d5fec4efa5210b4572ccb877778d1b157b6d563896250a21 namespace=k8s.io Apr 17 23:46:36.970103 containerd[2098]: time="2026-04-17T23:46:36.969378040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:46:37.049624 kubelet[3359]: I0417 23:46:37.047040 3359 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:46:37.333558 kubelet[3359]: I0417 23:46:37.333515 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-backend-key-pair\") pod \"whisker-c965584d8-dhqvj\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") " pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:37.334626 kubelet[3359]: I0417 23:46:37.333604 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtr2\" (UniqueName: \"kubernetes.io/projected/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-kube-api-access-6jtr2\") pod \"whisker-c965584d8-dhqvj\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") " pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:37.334626 kubelet[3359]: I0417 
23:46:37.333642 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxdwz\" (UniqueName: \"kubernetes.io/projected/32eba8f7-7133-4333-b018-bb3755f88966-kube-api-access-cxdwz\") pod \"coredns-674b8bbfcf-j45qt\" (UID: \"32eba8f7-7133-4333-b018-bb3755f88966\") " pod="kube-system/coredns-674b8bbfcf-j45qt" Apr 17 23:46:37.334626 kubelet[3359]: I0417 23:46:37.333665 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c34b680-c1de-441d-83fe-9024cfa08c4f-tigera-ca-bundle\") pod \"calico-kube-controllers-76479b7b8b-5q868\" (UID: \"0c34b680-c1de-441d-83fe-9024cfa08c4f\") " pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" Apr 17 23:46:37.334626 kubelet[3359]: I0417 23:46:37.333697 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3bfd7323-fa86-459b-911b-3e898630bb72-calico-apiserver-certs\") pod \"calico-apiserver-64bcf5fd68-fg4l8\" (UID: \"3bfd7323-fa86-459b-911b-3e898630bb72\") " pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" Apr 17 23:46:37.334626 kubelet[3359]: I0417 23:46:37.333722 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkccn\" (UniqueName: \"kubernetes.io/projected/0c34b680-c1de-441d-83fe-9024cfa08c4f-kube-api-access-pkccn\") pod \"calico-kube-controllers-76479b7b8b-5q868\" (UID: \"0c34b680-c1de-441d-83fe-9024cfa08c4f\") " pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" Apr 17 23:46:37.337167 kubelet[3359]: I0417 23:46:37.333751 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28650620-bee7-44f0-92c4-6968b30d2305-config\") pod \"goldmane-5b85766d88-c6h7b\" (UID: 
\"28650620-bee7-44f0-92c4-6968b30d2305\") " pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:37.337167 kubelet[3359]: I0417 23:46:37.333784 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z569t\" (UniqueName: \"kubernetes.io/projected/2ac59079-9564-4c61-aa81-95cd5165fe7e-kube-api-access-z569t\") pod \"calico-apiserver-64bcf5fd68-9j2x7\" (UID: \"2ac59079-9564-4c61-aa81-95cd5165fe7e\") " pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" Apr 17 23:46:37.337167 kubelet[3359]: I0417 23:46:37.333810 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhd2l\" (UniqueName: \"kubernetes.io/projected/3bfd7323-fa86-459b-911b-3e898630bb72-kube-api-access-bhd2l\") pod \"calico-apiserver-64bcf5fd68-fg4l8\" (UID: \"3bfd7323-fa86-459b-911b-3e898630bb72\") " pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" Apr 17 23:46:37.337167 kubelet[3359]: I0417 23:46:37.333831 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28650620-bee7-44f0-92c4-6968b30d2305-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-c6h7b\" (UID: \"28650620-bee7-44f0-92c4-6968b30d2305\") " pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:37.337167 kubelet[3359]: I0417 23:46:37.333859 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f71864c-b114-4755-a146-e6f57d67291e-config-volume\") pod \"coredns-674b8bbfcf-7ggvd\" (UID: \"4f71864c-b114-4755-a146-e6f57d67291e\") " pod="kube-system/coredns-674b8bbfcf-7ggvd" Apr 17 23:46:37.337390 kubelet[3359]: I0417 23:46:37.333882 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-nginx-config\") pod \"whisker-c965584d8-dhqvj\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") " pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:37.337390 kubelet[3359]: I0417 23:46:37.333903 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32eba8f7-7133-4333-b018-bb3755f88966-config-volume\") pod \"coredns-674b8bbfcf-j45qt\" (UID: \"32eba8f7-7133-4333-b018-bb3755f88966\") " pod="kube-system/coredns-674b8bbfcf-j45qt" Apr 17 23:46:37.337390 kubelet[3359]: I0417 23:46:37.333925 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/28650620-bee7-44f0-92c4-6968b30d2305-goldmane-key-pair\") pod \"goldmane-5b85766d88-c6h7b\" (UID: \"28650620-bee7-44f0-92c4-6968b30d2305\") " pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:37.337390 kubelet[3359]: I0417 23:46:37.333961 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-ca-bundle\") pod \"whisker-c965584d8-dhqvj\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") " pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:37.337390 kubelet[3359]: I0417 23:46:37.333984 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm67w\" (UniqueName: \"kubernetes.io/projected/28650620-bee7-44f0-92c4-6968b30d2305-kube-api-access-tm67w\") pod \"goldmane-5b85766d88-c6h7b\" (UID: \"28650620-bee7-44f0-92c4-6968b30d2305\") " pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:37.337608 kubelet[3359]: I0417 23:46:37.334031 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ac59079-9564-4c61-aa81-95cd5165fe7e-calico-apiserver-certs\") pod \"calico-apiserver-64bcf5fd68-9j2x7\" (UID: \"2ac59079-9564-4c61-aa81-95cd5165fe7e\") " pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" Apr 17 23:46:37.337608 kubelet[3359]: I0417 23:46:37.334063 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52qh\" (UniqueName: \"kubernetes.io/projected/4f71864c-b114-4755-a146-e6f57d67291e-kube-api-access-h52qh\") pod \"coredns-674b8bbfcf-7ggvd\" (UID: \"4f71864c-b114-4755-a146-e6f57d67291e\") " pod="kube-system/coredns-674b8bbfcf-7ggvd" Apr 17 23:46:37.546589 containerd[2098]: time="2026-04-17T23:46:37.546542194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j45qt,Uid:32eba8f7-7133-4333-b018-bb3755f88966,Namespace:kube-system,Attempt:0,}" Apr 17 23:46:37.547714 containerd[2098]: time="2026-04-17T23:46:37.547522717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7ggvd,Uid:4f71864c-b114-4755-a146-e6f57d67291e,Namespace:kube-system,Attempt:0,}" Apr 17 23:46:37.558023 containerd[2098]: time="2026-04-17T23:46:37.556519629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76479b7b8b-5q868,Uid:0c34b680-c1de-441d-83fe-9024cfa08c4f,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:37.586617 containerd[2098]: time="2026-04-17T23:46:37.586411367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-9j2x7,Uid:2ac59079-9564-4c61-aa81-95cd5165fe7e,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:37.592093 containerd[2098]: time="2026-04-17T23:46:37.592041449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c965584d8-dhqvj,Uid:e99467cd-624e-41c3-80c6-c0cb63e8e3b3,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:37.593316 containerd[2098]: time="2026-04-17T23:46:37.593271270Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6h7b,Uid:28650620-bee7-44f0-92c4-6968b30d2305,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:37.593539 containerd[2098]: time="2026-04-17T23:46:37.593517597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-fg4l8,Uid:3bfd7323-fa86-459b-911b-3e898630bb72,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:37.801050 containerd[2098]: time="2026-04-17T23:46:37.798760768Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:46:37.843541 containerd[2098]: time="2026-04-17T23:46:37.841597913Z" level=info msg="CreateContainer within sandbox \"b98863441c28b9e2bda38c7188d3b9265303f93a3f12040d6c81e7f498bb924b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"56d87e0b64d0c54f1657bab8be2e360dc0eb9205d63441dd80dc74305ed3977a\"" Apr 17 23:46:37.845149 containerd[2098]: time="2026-04-17T23:46:37.845088684Z" level=info msg="StartContainer for \"56d87e0b64d0c54f1657bab8be2e360dc0eb9205d63441dd80dc74305ed3977a\"" Apr 17 23:46:38.067227 containerd[2098]: time="2026-04-17T23:46:38.065903365Z" level=info msg="StartContainer for \"56d87e0b64d0c54f1657bab8be2e360dc0eb9205d63441dd80dc74305ed3977a\" returns successfully" Apr 17 23:46:38.258126 containerd[2098]: time="2026-04-17T23:46:38.257587433Z" level=error msg="Failed to destroy network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.263772 containerd[2098]: time="2026-04-17T23:46:38.258098209Z" level=error msg="encountered an error cleaning up failed sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.266234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023-shm.mount: Deactivated successfully. Apr 17 23:46:38.274025 containerd[2098]: time="2026-04-17T23:46:38.271897206Z" level=error msg="Failed to destroy network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.274025 containerd[2098]: time="2026-04-17T23:46:38.272552880Z" level=error msg="encountered an error cleaning up failed sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.281205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81-shm.mount: Deactivated successfully. 
Apr 17 23:46:38.305833 containerd[2098]: time="2026-04-17T23:46:38.305778758Z" level=error msg="Failed to destroy network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.306702 containerd[2098]: time="2026-04-17T23:46:38.306172012Z" level=error msg="encountered an error cleaning up failed sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.306702 containerd[2098]: time="2026-04-17T23:46:38.306240541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-9j2x7,Uid:2ac59079-9564-4c61-aa81-95cd5165fe7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.312661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b-shm.mount: Deactivated successfully. 
Apr 17 23:46:38.313672 containerd[2098]: time="2026-04-17T23:46:38.313327899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76479b7b8b-5q868,Uid:0c34b680-c1de-441d-83fe-9024cfa08c4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.313672 containerd[2098]: time="2026-04-17T23:46:38.313493393Z" level=error msg="Failed to destroy network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.316029 containerd[2098]: time="2026-04-17T23:46:38.314300204Z" level=error msg="encountered an error cleaning up failed sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.316029 containerd[2098]: time="2026-04-17T23:46:38.314377142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j45qt,Uid:32eba8f7-7133-4333-b018-bb3755f88966,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.316029 containerd[2098]: 
time="2026-04-17T23:46:38.314442512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7ggvd,Uid:4f71864c-b114-4755-a146-e6f57d67291e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.319102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269-shm.mount: Deactivated successfully. Apr 17 23:46:38.324639 containerd[2098]: time="2026-04-17T23:46:38.324421626Z" level=error msg="Failed to destroy network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.324803 containerd[2098]: time="2026-04-17T23:46:38.324769949Z" level=error msg="encountered an error cleaning up failed sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.324859 containerd[2098]: time="2026-04-17T23:46:38.324830179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-fg4l8,Uid:3bfd7323-fa86-459b-911b-3e898630bb72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.324971590Z" level=error msg="Failed to destroy network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.325351666Z" level=error msg="encountered an error cleaning up failed sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.325406363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c965584d8-dhqvj,Uid:e99467cd-624e-41c3-80c6-c0cb63e8e3b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.325519569Z" level=error msg="Failed to destroy network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.325781834Z" level=error msg="encountered an error cleaning up failed sandbox 
\"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.326478 containerd[2098]: time="2026-04-17T23:46:38.325820842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6h7b,Uid:28650620-bee7-44f0-92c4-6968b30d2305,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.335340 kubelet[3359]: E0417 23:46:38.335186 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.335340 kubelet[3359]: E0417 23:46:38.335281 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:38.335959 kubelet[3359]: E0417 23:46:38.335536 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.335959 kubelet[3359]: E0417 23:46:38.335587 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" Apr 17 23:46:38.338802 kubelet[3359]: E0417 23:46:38.338506 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" Apr 17 23:46:38.338802 kubelet[3359]: E0417 23:46:38.338582 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.338802 kubelet[3359]: E0417 23:46:38.338601 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64bcf5fd68-9j2x7_calico-system(2ac59079-9564-4c61-aa81-95cd5165fe7e)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-64bcf5fd68-9j2x7_calico-system(2ac59079-9564-4c61-aa81-95cd5165fe7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" podUID="2ac59079-9564-4c61-aa81-95cd5165fe7e" Apr 17 23:46:38.339061 kubelet[3359]: E0417 23:46:38.338619 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" Apr 17 23:46:38.339061 kubelet[3359]: E0417 23:46:38.338506 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-c6h7b" Apr 17 23:46:38.339061 kubelet[3359]: E0417 23:46:38.338641 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" Apr 17 23:46:38.339209 kubelet[3359]: E0417 23:46:38.338672 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-c6h7b_calico-system(28650620-bee7-44f0-92c4-6968b30d2305)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-c6h7b_calico-system(28650620-bee7-44f0-92c4-6968b30d2305)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-c6h7b" podUID="28650620-bee7-44f0-92c4-6968b30d2305" Apr 17 23:46:38.339209 kubelet[3359]: E0417 23:46:38.338960 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.339209 kubelet[3359]: E0417 23:46:38.338999 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7ggvd" Apr 17 23:46:38.340400 kubelet[3359]: E0417 23:46:38.339036 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7ggvd" Apr 17 23:46:38.340400 kubelet[3359]: E0417 23:46:38.339092 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7ggvd_kube-system(4f71864c-b114-4755-a146-e6f57d67291e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7ggvd_kube-system(4f71864c-b114-4755-a146-e6f57d67291e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7ggvd" podUID="4f71864c-b114-4755-a146-e6f57d67291e" Apr 17 23:46:38.340400 kubelet[3359]: E0417 23:46:38.339136 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.340560 kubelet[3359]: E0417 23:46:38.339161 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-j45qt" Apr 17 23:46:38.340560 kubelet[3359]: E0417 23:46:38.339181 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j45qt" Apr 17 23:46:38.340560 kubelet[3359]: E0417 23:46:38.339219 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j45qt_kube-system(32eba8f7-7133-4333-b018-bb3755f88966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j45qt_kube-system(32eba8f7-7133-4333-b018-bb3755f88966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j45qt" podUID="32eba8f7-7133-4333-b018-bb3755f88966" Apr 17 23:46:38.340838 kubelet[3359]: E0417 23:46:38.339255 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.340838 kubelet[3359]: E0417 23:46:38.339277 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" Apr 17 23:46:38.340838 kubelet[3359]: E0417 23:46:38.338681 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76479b7b8b-5q868_calico-system(0c34b680-c1de-441d-83fe-9024cfa08c4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76479b7b8b-5q868_calico-system(0c34b680-c1de-441d-83fe-9024cfa08c4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" podUID="0c34b680-c1de-441d-83fe-9024cfa08c4f" Apr 17 23:46:38.340959 kubelet[3359]: E0417 23:46:38.339294 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" Apr 17 23:46:38.340959 kubelet[3359]: E0417 23:46:38.339321 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.340959 kubelet[3359]: E0417 23:46:38.339335 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64bcf5fd68-fg4l8_calico-system(3bfd7323-fa86-459b-911b-3e898630bb72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64bcf5fd68-fg4l8_calico-system(3bfd7323-fa86-459b-911b-3e898630bb72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" podUID="3bfd7323-fa86-459b-911b-3e898630bb72" Apr 17 23:46:38.341315 kubelet[3359]: E0417 23:46:38.339354 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:38.341315 kubelet[3359]: E0417 23:46:38.339372 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c965584d8-dhqvj" Apr 17 23:46:38.341315 kubelet[3359]: E0417 23:46:38.339427 3359 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c965584d8-dhqvj_calico-system(e99467cd-624e-41c3-80c6-c0cb63e8e3b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c965584d8-dhqvj_calico-system(e99467cd-624e-41c3-80c6-c0cb63e8e3b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c965584d8-dhqvj" podUID="e99467cd-624e-41c3-80c6-c0cb63e8e3b3" Apr 17 23:46:38.567681 containerd[2098]: time="2026-04-17T23:46:38.567517641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch9lf,Uid:15f67ed1-2981-42fd-8b37-94a71c9f9349,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:38.644522 containerd[2098]: time="2026-04-17T23:46:38.644465071Z" level=error msg="Failed to destroy network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.644869 containerd[2098]: time="2026-04-17T23:46:38.644824245Z" level=error msg="encountered an error cleaning up failed sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.645636 containerd[2098]: time="2026-04-17T23:46:38.644889581Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ch9lf,Uid:15f67ed1-2981-42fd-8b37-94a71c9f9349,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.645831 kubelet[3359]: E0417 23:46:38.645202 3359 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:46:38.645831 kubelet[3359]: E0417 23:46:38.645267 3359 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ch9lf" Apr 17 23:46:38.645831 kubelet[3359]: E0417 23:46:38.645289 3359 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ch9lf" Apr 17 23:46:38.646072 kubelet[3359]: E0417 23:46:38.645338 3359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-ch9lf_calico-system(15f67ed1-2981-42fd-8b37-94a71c9f9349)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ch9lf_calico-system(15f67ed1-2981-42fd-8b37-94a71c9f9349)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ch9lf" podUID="15f67ed1-2981-42fd-8b37-94a71c9f9349" Apr 17 23:46:38.791600 kubelet[3359]: I0417 23:46:38.791332 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:46:38.895426 kubelet[3359]: I0417 23:46:38.894593 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:46:38.924130 kubelet[3359]: I0417 23:46:38.924099 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:46:38.967235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036-shm.mount: Deactivated successfully. Apr 17 23:46:38.967434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595-shm.mount: Deactivated successfully. Apr 17 23:46:38.967578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0-shm.mount: Deactivated successfully. 
Apr 17 23:46:39.064926 containerd[2098]: time="2026-04-17T23:46:39.064384589Z" level=info msg="StopPodSandbox for \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\"" Apr 17 23:46:39.071467 containerd[2098]: time="2026-04-17T23:46:39.070528645Z" level=info msg="StopPodSandbox for \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\"" Apr 17 23:46:39.074769 containerd[2098]: time="2026-04-17T23:46:39.074385411Z" level=info msg="Ensure that sandbox 955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595 in task-service has been cleanup successfully" Apr 17 23:46:39.075574 containerd[2098]: time="2026-04-17T23:46:39.075541057Z" level=info msg="Ensure that sandbox dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe in task-service has been cleanup successfully" Apr 17 23:46:39.097687 containerd[2098]: time="2026-04-17T23:46:39.097646191Z" level=info msg="StopPodSandbox for \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\"" Apr 17 23:46:39.097874 containerd[2098]: time="2026-04-17T23:46:39.097848627Z" level=info msg="Ensure that sandbox 6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036 in task-service has been cleanup successfully" Apr 17 23:46:39.113509 kubelet[3359]: I0417 23:46:39.113467 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:46:39.122462 containerd[2098]: time="2026-04-17T23:46:39.122303662Z" level=info msg="StopPodSandbox for \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\"" Apr 17 23:46:39.128099 kubelet[3359]: I0417 23:46:39.127627 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:46:39.129080 containerd[2098]: time="2026-04-17T23:46:39.129047122Z" level=info msg="StopPodSandbox for 
\"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\"" Apr 17 23:46:39.131684 kubelet[3359]: I0417 23:46:39.130488 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:46:39.131818 containerd[2098]: time="2026-04-17T23:46:39.131116911Z" level=info msg="StopPodSandbox for \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\"" Apr 17 23:46:39.131818 containerd[2098]: time="2026-04-17T23:46:39.131383668Z" level=info msg="Ensure that sandbox 54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81 in task-service has been cleanup successfully" Apr 17 23:46:39.133043 kubelet[3359]: I0417 23:46:39.133022 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:46:39.134082 containerd[2098]: time="2026-04-17T23:46:39.134055296Z" level=info msg="Ensure that sandbox 9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0 in task-service has been cleanup successfully" Apr 17 23:46:39.135124 containerd[2098]: time="2026-04-17T23:46:39.135082840Z" level=info msg="Ensure that sandbox 8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b in task-service has been cleanup successfully" Apr 17 23:46:39.136243 containerd[2098]: time="2026-04-17T23:46:39.136218400Z" level=info msg="StopPodSandbox for \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\"" Apr 17 23:46:39.137150 containerd[2098]: time="2026-04-17T23:46:39.137125192Z" level=info msg="Ensure that sandbox 220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023 in task-service has been cleanup successfully" Apr 17 23:46:39.150575 kubelet[3359]: I0417 23:46:39.150485 3359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 
23:46:39.152089 containerd[2098]: time="2026-04-17T23:46:39.151870007Z" level=info msg="StopPodSandbox for \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\""
Apr 17 23:46:39.153224 containerd[2098]: time="2026-04-17T23:46:39.153194096Z" level=info msg="Ensure that sandbox f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269 in task-service has been cleanup successfully"
Apr 17 23:46:39.284534 systemd[1]: run-containerd-runc-k8s.io-56d87e0b64d0c54f1657bab8be2e360dc0eb9205d63441dd80dc74305ed3977a-runc.7ThuOR.mount: Deactivated successfully.
Apr 17 23:46:39.481493 kubelet[3359]: I0417 23:46:39.476417 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jscr6" podStartSLOduration=5.044838576 podStartE2EDuration="21.476389465s" podCreationTimestamp="2026-04-17 23:46:18 +0000 UTC" firstStartedPulling="2026-04-17 23:46:19.307084373 +0000 UTC m=+18.944272031" lastFinishedPulling="2026-04-17 23:46:35.73863526 +0000 UTC m=+35.375822920" observedRunningTime="2026-04-17 23:46:38.894342755 +0000 UTC m=+38.531530435" watchObservedRunningTime="2026-04-17 23:46:39.476389465 +0000 UTC m=+39.113577146"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.596 [INFO][4816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.596 [INFO][4816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" iface="eth0" netns="/var/run/netns/cni-0f10354c-68a6-6377-fee3-ac670d1d8cf2"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.596 [INFO][4816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" iface="eth0" netns="/var/run/netns/cni-0f10354c-68a6-6377-fee3-ac670d1d8cf2"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.597 [INFO][4816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" iface="eth0" netns="/var/run/netns/cni-0f10354c-68a6-6377-fee3-ac670d1d8cf2"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.597 [INFO][4816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.597 [INFO][4816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.761 [INFO][4934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.762 [INFO][4934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.762 [INFO][4934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.804 [WARNING][4934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.804 [INFO][4934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.806 [INFO][4934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.824246 containerd[2098]: 2026-04-17 23:46:39.817 [INFO][4816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b"
Apr 17 23:46:39.827236 containerd[2098]: time="2026-04-17T23:46:39.825428660Z" level=info msg="TearDown network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" successfully"
Apr 17 23:46:39.827236 containerd[2098]: time="2026-04-17T23:46:39.827074216Z" level=info msg="StopPodSandbox for \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" returns successfully"
Apr 17 23:46:39.829391 containerd[2098]: time="2026-04-17T23:46:39.828767902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-9j2x7,Uid:2ac59079-9564-4c61-aa81-95cd5165fe7e,Namespace:calico-system,Attempt:1,}"
Apr 17 23:46:39.831666 systemd[1]: run-netns-cni\x2d0f10354c\x2d68a6\x2d6377\x2dfee3\x2dac670d1d8cf2.mount: Deactivated successfully.
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.472 [INFO][4848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.480 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" iface="eth0" netns="/var/run/netns/cni-41f6466c-c88b-5da4-c0bc-5f140cd8d763"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.485 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" iface="eth0" netns="/var/run/netns/cni-41f6466c-c88b-5da4-c0bc-5f140cd8d763"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.487 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" iface="eth0" netns="/var/run/netns/cni-41f6466c-c88b-5da4-c0bc-5f140cd8d763"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.495 [INFO][4848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.497 [INFO][4848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.806 [INFO][4900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.807 [INFO][4900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.807 [INFO][4900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.814 [WARNING][4900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.814 [INFO][4900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.817 [INFO][4900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.835196 containerd[2098]: 2026-04-17 23:46:39.821 [INFO][4848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269"
Apr 17 23:46:39.848233 containerd[2098]: time="2026-04-17T23:46:39.843818742Z" level=info msg="TearDown network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" successfully"
Apr 17 23:46:39.848233 containerd[2098]: time="2026-04-17T23:46:39.843862491Z" level=info msg="StopPodSandbox for \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" returns successfully"
Apr 17 23:46:39.848233 containerd[2098]: time="2026-04-17T23:46:39.844639403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j45qt,Uid:32eba8f7-7133-4333-b018-bb3755f88966,Namespace:kube-system,Attempt:1,}"
Apr 17 23:46:39.847611 systemd[1]: run-netns-cni\x2d41f6466c\x2dc88b\x2d5da4\x2dc0bc\x2d5f140cd8d763.mount: Deactivated successfully.
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.495 [INFO][4850] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.497 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" iface="eth0" netns="/var/run/netns/cni-a2eb6415-3d68-faa6-83ef-2e1d2c308517"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.500 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" iface="eth0" netns="/var/run/netns/cni-a2eb6415-3d68-faa6-83ef-2e1d2c308517"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.501 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" iface="eth0" netns="/var/run/netns/cni-a2eb6415-3d68-faa6-83ef-2e1d2c308517"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.508 [INFO][4850] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.510 [INFO][4850] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.785 [INFO][4902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.786 [INFO][4902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.817 [INFO][4902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.835 [WARNING][4902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.835 [INFO][4902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.837 [INFO][4902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.858205 containerd[2098]: 2026-04-17 23:46:39.854 [INFO][4850] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81"
Apr 17 23:46:39.859730 containerd[2098]: time="2026-04-17T23:46:39.858900375Z" level=info msg="TearDown network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" successfully"
Apr 17 23:46:39.859730 containerd[2098]: time="2026-04-17T23:46:39.858931300Z" level=info msg="StopPodSandbox for \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" returns successfully"
Apr 17 23:46:39.862551 containerd[2098]: time="2026-04-17T23:46:39.861646211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76479b7b8b-5q868,Uid:0c34b680-c1de-441d-83fe-9024cfa08c4f,Namespace:calico-system,Attempt:1,}"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.556 [INFO][4768] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.557 [INFO][4768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" iface="eth0" netns="/var/run/netns/cni-28fddf0d-3142-1349-6ed4-e4e9b7f616cc"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.559 [INFO][4768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" iface="eth0" netns="/var/run/netns/cni-28fddf0d-3142-1349-6ed4-e4e9b7f616cc"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.562 [INFO][4768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" iface="eth0" netns="/var/run/netns/cni-28fddf0d-3142-1349-6ed4-e4e9b7f616cc"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.562 [INFO][4768] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.562 [INFO][4768] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.796 [INFO][4921] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.797 [INFO][4921] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.837 [INFO][4921] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.853 [WARNING][4921] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.853 [INFO][4921] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0"
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.854 [INFO][4921] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.878191 containerd[2098]: 2026-04-17 23:46:39.866 [INFO][4768] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe"
Apr 17 23:46:39.879026 containerd[2098]: time="2026-04-17T23:46:39.878983520Z" level=info msg="TearDown network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" successfully"
Apr 17 23:46:39.879129 containerd[2098]: time="2026-04-17T23:46:39.879111608Z" level=info msg="StopPodSandbox for \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" returns successfully"
Apr 17 23:46:39.881042 containerd[2098]: time="2026-04-17T23:46:39.880986180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch9lf,Uid:15f67ed1-2981-42fd-8b37-94a71c9f9349,Namespace:calico-system,Attempt:1,}"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.556 [INFO][4810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.557 [INFO][4810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" iface="eth0" netns="/var/run/netns/cni-50d56233-cb42-c7d5-52c8-06e34a14dbbb"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.558 [INFO][4810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" iface="eth0" netns="/var/run/netns/cni-50d56233-cb42-c7d5-52c8-06e34a14dbbb"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.562 [INFO][4810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" iface="eth0" netns="/var/run/netns/cni-50d56233-cb42-c7d5-52c8-06e34a14dbbb"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.563 [INFO][4810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.563 [INFO][4810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.816 [INFO][4920] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.816 [INFO][4920] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.856 [INFO][4920] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.871 [WARNING][4920] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.871 [INFO][4920] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0"
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.873 [INFO][4920] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.898211 containerd[2098]: 2026-04-17 23:46:39.884 [INFO][4810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.517 [INFO][4767] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.521 [INFO][4767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" iface="eth0" netns="/var/run/netns/cni-4326b6f0-cd42-054d-2b37-5cf74bc9eb86"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.521 [INFO][4767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" iface="eth0" netns="/var/run/netns/cni-4326b6f0-cd42-054d-2b37-5cf74bc9eb86"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.523 [INFO][4767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" iface="eth0" netns="/var/run/netns/cni-4326b6f0-cd42-054d-2b37-5cf74bc9eb86"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.523 [INFO][4767] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.523 [INFO][4767] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.797 [INFO][4903] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.797 [INFO][4903] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.873 [INFO][4903] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.884 [WARNING][4903] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.884 [INFO][4903] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0"
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.886 [INFO][4903] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.899234 containerd[2098]: 2026-04-17 23:46:39.894 [INFO][4767] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595"
Apr 17 23:46:39.899234 containerd[2098]: time="2026-04-17T23:46:39.898671665Z" level=info msg="TearDown network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" successfully"
Apr 17 23:46:39.899234 containerd[2098]: time="2026-04-17T23:46:39.898701436Z" level=info msg="StopPodSandbox for \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" returns successfully"
Apr 17 23:46:39.899234 containerd[2098]: time="2026-04-17T23:46:39.898890739Z" level=info msg="TearDown network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" successfully"
Apr 17 23:46:39.899234 containerd[2098]: time="2026-04-17T23:46:39.898909547Z" level=info msg="StopPodSandbox for \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" returns successfully"
Apr 17 23:46:39.902421 containerd[2098]: time="2026-04-17T23:46:39.902348022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6h7b,Uid:28650620-bee7-44f0-92c4-6968b30d2305,Namespace:calico-system,Attempt:1,}"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.562 [INFO][4819] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.563 [INFO][4819] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" iface="eth0" netns="/var/run/netns/cni-39ca78a9-33da-e042-d8dd-4e757e9de908"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.564 [INFO][4819] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" iface="eth0" netns="/var/run/netns/cni-39ca78a9-33da-e042-d8dd-4e757e9de908"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.564 [INFO][4819] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" iface="eth0" netns="/var/run/netns/cni-39ca78a9-33da-e042-d8dd-4e757e9de908"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.564 [INFO][4819] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.565 [INFO][4819] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.799 [INFO][4923] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.799 [INFO][4923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.886 [INFO][4923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.901 [WARNING][4923] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.901 [INFO][4923] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0"
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.903 [INFO][4923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.927452 containerd[2098]: 2026-04-17 23:46:39.912 [INFO][4819] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036"
Apr 17 23:46:39.929941 containerd[2098]: time="2026-04-17T23:46:39.927557474Z" level=info msg="TearDown network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" successfully"
Apr 17 23:46:39.929941 containerd[2098]: time="2026-04-17T23:46:39.927587148Z" level=info msg="StopPodSandbox for \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" returns successfully"
Apr 17 23:46:39.933330 containerd[2098]: time="2026-04-17T23:46:39.932987503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-fg4l8,Uid:3bfd7323-fa86-459b-911b-3e898630bb72,Namespace:calico-system,Attempt:1,}"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.491 [INFO][4817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.496 [INFO][4817] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" iface="eth0" netns="/var/run/netns/cni-1b3f1322-a5d8-9641-9127-378a94919d56"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.497 [INFO][4817] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" iface="eth0" netns="/var/run/netns/cni-1b3f1322-a5d8-9641-9127-378a94919d56"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.499 [INFO][4817] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" iface="eth0" netns="/var/run/netns/cni-1b3f1322-a5d8-9641-9127-378a94919d56"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.499 [INFO][4817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.499 [INFO][4817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.801 [INFO][4896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.802 [INFO][4896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.903 [INFO][4896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.915 [WARNING][4896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.915 [INFO][4896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0"
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.918 [INFO][4896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:39.941834 containerd[2098]: 2026-04-17 23:46:39.933 [INFO][4817] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023"
Apr 17 23:46:39.943719 containerd[2098]: time="2026-04-17T23:46:39.942752156Z" level=info msg="TearDown network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" successfully"
Apr 17 23:46:39.943719 containerd[2098]: time="2026-04-17T23:46:39.942796558Z" level=info msg="StopPodSandbox for \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" returns successfully"
Apr 17 23:46:39.944677 containerd[2098]: time="2026-04-17T23:46:39.944146921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7ggvd,Uid:4f71864c-b114-4755-a146-e6f57d67291e,Namespace:kube-system,Attempt:1,}"
Apr 17 23:46:39.972992 kubelet[3359]: I0417 23:46:39.972485 3359 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-nginx-config\") pod \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") "
Apr 17 23:46:39.972992 kubelet[3359]: I0417 23:46:39.972582 3359 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jtr2\" (UniqueName: \"kubernetes.io/projected/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-kube-api-access-6jtr2\") pod \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") "
Apr 17 23:46:39.972992 kubelet[3359]: I0417 23:46:39.972632 3359 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-ca-bundle\") pod \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") "
Apr 17 23:46:39.972992 kubelet[3359]: I0417 23:46:39.973073 3359 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-backend-key-pair\") pod \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\" (UID: \"e99467cd-624e-41c3-80c6-c0cb63e8e3b3\") "
Apr 17 23:46:39.987282 systemd[1]: run-netns-cni\x2d28fddf0d\x2d3142\x2d1349\x2d6ed4\x2de4e9b7f616cc.mount: Deactivated successfully.
Apr 17 23:46:39.987493 systemd[1]: run-netns-cni\x2da2eb6415\x2d3d68\x2dfaa6\x2d83ef\x2d2e1d2c308517.mount: Deactivated successfully.
Apr 17 23:46:39.987643 systemd[1]: run-netns-cni\x2d39ca78a9\x2d33da\x2de042\x2dd8dd\x2d4e757e9de908.mount: Deactivated successfully.
Apr 17 23:46:39.987773 systemd[1]: run-netns-cni\x2d4326b6f0\x2dcd42\x2d054d\x2d2b37\x2d5cf74bc9eb86.mount: Deactivated successfully.
Apr 17 23:46:39.987902 systemd[1]: run-netns-cni\x2d50d56233\x2dcb42\x2dc7d5\x2d52c8\x2d06e34a14dbbb.mount: Deactivated successfully.
Apr 17 23:46:39.990149 systemd[1]: run-netns-cni\x2d1b3f1322\x2da5d8\x2d9641\x2d9127\x2d378a94919d56.mount: Deactivated successfully.
Apr 17 23:46:40.004917 kubelet[3359]: I0417 23:46:40.004857 3359 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e99467cd-624e-41c3-80c6-c0cb63e8e3b3" (UID: "e99467cd-624e-41c3-80c6-c0cb63e8e3b3"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:46:40.005204 kubelet[3359]: I0417 23:46:40.005178 3359 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e99467cd-624e-41c3-80c6-c0cb63e8e3b3" (UID: "e99467cd-624e-41c3-80c6-c0cb63e8e3b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:46:40.007171 kubelet[3359]: I0417 23:46:39.999670 3359 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e99467cd-624e-41c3-80c6-c0cb63e8e3b3" (UID: "e99467cd-624e-41c3-80c6-c0cb63e8e3b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:46:40.009835 systemd[1]: var-lib-kubelet-pods-e99467cd\x2d624e\x2d41c3\x2d80c6\x2dc0cb63e8e3b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 17 23:46:40.010451 kubelet[3359]: I0417 23:46:40.010369 3359 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-kube-api-access-6jtr2" (OuterVolumeSpecName: "kube-api-access-6jtr2") pod "e99467cd-624e-41c3-80c6-c0cb63e8e3b3" (UID: "e99467cd-624e-41c3-80c6-c0cb63e8e3b3"). InnerVolumeSpecName "kube-api-access-6jtr2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:46:40.024496 systemd[1]: var-lib-kubelet-pods-e99467cd\x2d624e\x2d41c3\x2d80c6\x2dc0cb63e8e3b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jtr2.mount: Deactivated successfully.
Apr 17 23:46:40.074749 kubelet[3359]: I0417 23:46:40.074544 3359 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-backend-key-pair\") on node \"ip-172-31-16-149\" DevicePath \"\""
Apr 17 23:46:40.074749 kubelet[3359]: I0417 23:46:40.074587 3359 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-nginx-config\") on node \"ip-172-31-16-149\" DevicePath \"\""
Apr 17 23:46:40.074749 kubelet[3359]: I0417 23:46:40.074723 3359 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jtr2\" (UniqueName: \"kubernetes.io/projected/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-kube-api-access-6jtr2\") on node \"ip-172-31-16-149\" DevicePath \"\""
Apr 17 23:46:40.074749 kubelet[3359]: I0417 23:46:40.074737 3359 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99467cd-624e-41c3-80c6-c0cb63e8e3b3-whisker-ca-bundle\") on node \"ip-172-31-16-149\" DevicePath \"\""
Apr 17 23:46:40.574029 kubelet[3359]: I0417 23:46:40.569857 3359 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99467cd-624e-41c3-80c6-c0cb63e8e3b3" path="/var/lib/kubelet/pods/e99467cd-624e-41c3-80c6-c0cb63e8e3b3/volumes"
Apr 17 23:46:40.592500 kubelet[3359]: I0417 23:46:40.592212 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c35b8840-ac49-4f48-82bf-2b42b6d47424-whisker-backend-key-pair\") pod \"whisker-74888fcb5-xlnn8\" (UID: \"c35b8840-ac49-4f48-82bf-2b42b6d47424\") " pod="calico-system/whisker-74888fcb5-xlnn8"
Apr 17 23:46:40.592500 kubelet[3359]: I0417 23:46:40.592270 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c35b8840-ac49-4f48-82bf-2b42b6d47424-nginx-config\") pod \"whisker-74888fcb5-xlnn8\" (UID: \"c35b8840-ac49-4f48-82bf-2b42b6d47424\") " pod="calico-system/whisker-74888fcb5-xlnn8"
Apr 17 23:46:40.592500 kubelet[3359]: I0417 23:46:40.592325 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c35b8840-ac49-4f48-82bf-2b42b6d47424-whisker-ca-bundle\") pod \"whisker-74888fcb5-xlnn8\" (UID: \"c35b8840-ac49-4f48-82bf-2b42b6d47424\") " pod="calico-system/whisker-74888fcb5-xlnn8"
Apr 17 23:46:40.592500 kubelet[3359]: I0417 23:46:40.592379 3359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84ct\" (UniqueName: \"kubernetes.io/projected/c35b8840-ac49-4f48-82bf-2b42b6d47424-kube-api-access-m84ct\") pod \"whisker-74888fcb5-xlnn8\" (UID: \"c35b8840-ac49-4f48-82bf-2b42b6d47424\") " pod="calico-system/whisker-74888fcb5-xlnn8"
Apr 17 23:46:40.843101 containerd[2098]: time="2026-04-17T23:46:40.842981073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74888fcb5-xlnn8,Uid:c35b8840-ac49-4f48-82bf-2b42b6d47424,Namespace:calico-system,Attempt:0,}"
Apr 17 23:46:40.850928 (udev-worker)[5137]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:46:40.867591 systemd-networkd[1658]: caliba9c2ae5b73: Link UP
Apr 17 23:46:40.869260 systemd-networkd[1658]: caliba9c2ae5b73: Gained carrier
Apr 17 23:46:40.926081 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:46:40.924171 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:46:40.924200 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.106 [ERROR][4958] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.172 [INFO][4958] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0 coredns-674b8bbfcf- kube-system 32eba8f7-7133-4333-b018-bb3755f88966 923 0 2026-04-17 23:46:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-149 coredns-674b8bbfcf-j45qt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba9c2ae5b73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.172 [INFO][4958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.370 [INFO][5047] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" HandleID="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.494 [INFO][5047] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" HandleID="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-149", "pod":"coredns-674b8bbfcf-j45qt", "timestamp":"2026-04-17 23:46:40.370423059 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057a840)}
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.526 [INFO][5047] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.526 [INFO][5047] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.526 [INFO][5047] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149'
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.572 [INFO][5047] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.596 [INFO][5047] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.633 [INFO][5047] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.653 [INFO][5047] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.672 [INFO][5047] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.673 [INFO][5047] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.686 [INFO][5047] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.727 [INFO][5047] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.751 [INFO][5047] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.193/26] block=192.168.97.192/26 handle="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.751 [INFO][5047] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.193/26] handle="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" host="ip-172-31-16-149"
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.752 [INFO][5047] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:41.031881 containerd[2098]: 2026-04-17 23:46:40.752 [INFO][5047] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.193/26] IPv6=[] ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" HandleID="k8s-pod-network.ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.769 [INFO][4958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"32eba8f7-7133-4333-b018-bb3755f88966", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"coredns-674b8bbfcf-j45qt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba9c2ae5b73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.769 [INFO][4958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.193/32] ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.769 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba9c2ae5b73 ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.899 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.901 [INFO][4958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"32eba8f7-7133-4333-b018-bb3755f88966", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb", Pod:"coredns-674b8bbfcf-j45qt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba9c2ae5b73", MAC:"7a:0e:ae:b9:66:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:46:41.033205 containerd[2098]: 2026-04-17 23:46:40.948 [INFO][4958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-j45qt" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0"
Apr 17 23:46:41.147120 systemd-networkd[1658]: cali175bed3475d: Link UP
Apr 17 23:46:41.158372 systemd-networkd[1658]: cali175bed3475d: Gained carrier
Apr 17 23:46:41.274757 systemd-networkd[1658]: calif6c91cfd500: Link UP
Apr 17 23:46:41.277899 systemd-networkd[1658]: calif6c91cfd500: Gained carrier
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.048 [ERROR][4967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.088 [INFO][4967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0 calico-apiserver-64bcf5fd68- calico-system 2ac59079-9564-4c61-aa81-95cd5165fe7e 930 0 2026-04-17 23:46:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64bcf5fd68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-149 calico-apiserver-64bcf5fd68-9j2x7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali175bed3475d [] [] }} ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.089 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.615 [INFO][5026] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" HandleID="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.671 [INFO][5026] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" HandleID="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"calico-apiserver-64bcf5fd68-9j2x7", "timestamp":"2026-04-17 23:46:40.615300687 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000428580)}
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.671 [INFO][5026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.753 [INFO][5026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.753 [INFO][5026] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149'
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.806 [INFO][5026] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.874 [INFO][5026] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.913 [INFO][5026] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.920 [INFO][5026] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.934 [INFO][5026] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.934 [INFO][5026] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:40.940 [INFO][5026] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:41.009 [INFO][5026] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:41.038 [INFO][5026] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.194/26] block=192.168.97.192/26 handle="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:41.038 [INFO][5026] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.194/26] handle="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" host="ip-172-31-16-149"
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:41.038 [INFO][5026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:41.307890 containerd[2098]: 2026-04-17 23:46:41.038 [INFO][5026] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.194/26] IPv6=[] ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" HandleID="k8s-pod-network.86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.099 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"2ac59079-9564-4c61-aa81-95cd5165fe7e", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"calico-apiserver-64bcf5fd68-9j2x7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali175bed3475d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.099 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.194/32] ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.099 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali175bed3475d ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.161 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.163 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"2ac59079-9564-4c61-aa81-95cd5165fe7e", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de", Pod:"calico-apiserver-64bcf5fd68-9j2x7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali175bed3475d", MAC:"82:10:0a:1c:cd:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:46:41.314634 containerd[2098]: 2026-04-17 23:46:41.222 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-9j2x7" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.093 [ERROR][4975] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.167 [INFO][4975] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0 calico-kube-controllers-76479b7b8b- calico-system 0c34b680-c1de-441d-83fe-9024cfa08c4f 924 0 2026-04-17 23:46:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76479b7b8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-149 calico-kube-controllers-76479b7b8b-5q868 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6c91cfd500 [] [] }} ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.167 [INFO][4975] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.685 [INFO][5044] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" HandleID="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.737 [INFO][5044] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" HandleID="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123650), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"calico-kube-controllers-76479b7b8b-5q868", "timestamp":"2026-04-17 23:46:40.685102107 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000479080)}
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:40.737 [INFO][5044] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.039 [INFO][5044] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.039 [INFO][5044] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149'
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.049 [INFO][5044] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.057 [INFO][5044] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.064 [INFO][5044] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.069 [INFO][5044] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.075 [INFO][5044] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.075 [INFO][5044] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.080 [INFO][5044] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.094 [INFO][5044] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.120 [INFO][5044] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.195/26] block=192.168.97.192/26 handle="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.120 [INFO][5044] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.195/26] handle="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" host="ip-172-31-16-149"
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.120 [INFO][5044] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:46:41.375642 containerd[2098]: 2026-04-17 23:46:41.120 [INFO][5044] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.195/26] IPv6=[] ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" HandleID="k8s-pod-network.d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.168 [INFO][4975] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0", GenerateName:"calico-kube-controllers-76479b7b8b-", Namespace:"calico-system", SelfLink:"", UID:"0c34b680-c1de-441d-83fe-9024cfa08c4f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76479b7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"calico-kube-controllers-76479b7b8b-5q868", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6c91cfd500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.169 [INFO][4975] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.195/32] ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.169 [INFO][4975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6c91cfd500 ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0"
Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.278 [INFO][4975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868"
WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.287 [INFO][4975] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0", GenerateName:"calico-kube-controllers-76479b7b8b-", Namespace:"calico-system", SelfLink:"", UID:"0c34b680-c1de-441d-83fe-9024cfa08c4f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76479b7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e", Pod:"calico-kube-controllers-76479b7b8b-5q868", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6c91cfd500", 
MAC:"f6:50:64:91:99:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.377901 containerd[2098]: 2026-04-17 23:46:41.324 [INFO][4975] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e" Namespace="calico-system" Pod="calico-kube-controllers-76479b7b8b-5q868" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:46:41.427155 systemd-networkd[1658]: calib9711373a25: Link UP Apr 17 23:46:41.450570 systemd-networkd[1658]: calib9711373a25: Gained carrier Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:40.325 [ERROR][5000] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:40.433 [INFO][5000] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0 goldmane-5b85766d88- calico-system 28650620-bee7-44f0-92c4-6968b30d2305 926 0 2026-04-17 23:46:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-149 goldmane-5b85766d88-c6h7b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9711373a25 [] [] }} ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:40.433 [INFO][5000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:40.866 [INFO][5089] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" HandleID="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.011 [INFO][5089] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" HandleID="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000693090), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"goldmane-5b85766d88-c6h7b", "timestamp":"2026-04-17 23:46:40.866677891 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057e580)} Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.012 [INFO][5089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.120 [INFO][5089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.122 [INFO][5089] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149' Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.163 [INFO][5089] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.220 [INFO][5089] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.273 [INFO][5089] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.279 [INFO][5089] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.282 [INFO][5089] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.283 [INFO][5089] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.285 [INFO][5089] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.301 [INFO][5089] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.329 [INFO][5089] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.196/26] block=192.168.97.192/26 
handle="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.329 [INFO][5089] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.196/26] handle="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" host="ip-172-31-16-149" Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.329 [INFO][5089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:46:41.550197 containerd[2098]: 2026-04-17 23:46:41.329 [INFO][5089] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.196/26] IPv6=[] ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" HandleID="k8s-pod-network.1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.368 [INFO][5000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28650620-bee7-44f0-92c4-6968b30d2305", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"goldmane-5b85766d88-c6h7b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9711373a25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.376 [INFO][5000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.196/32] ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.376 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9711373a25 ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.485 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.490 [INFO][5000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28650620-bee7-44f0-92c4-6968b30d2305", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda", Pod:"goldmane-5b85766d88-c6h7b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9711373a25", MAC:"26:28:c9:b0:e9:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.551226 containerd[2098]: 2026-04-17 23:46:41.536 [INFO][5000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda" Namespace="calico-system" Pod="goldmane-5b85766d88-c6h7b" 
WorkloadEndpoint="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:46:41.590557 systemd-networkd[1658]: calia77a608c46a: Link UP Apr 17 23:46:41.593190 systemd-networkd[1658]: calia77a608c46a: Gained carrier Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:40.435 [ERROR][5016] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:40.606 [INFO][5016] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0 coredns-674b8bbfcf- kube-system 4f71864c-b114-4755-a146-e6f57d67291e 925 0 2026-04-17 23:46:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-149 coredns-674b8bbfcf-7ggvd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia77a608c46a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:40.606 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.200 [INFO][5117] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" 
HandleID="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.247 [INFO][5117] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" HandleID="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b7720), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-149", "pod":"coredns-674b8bbfcf-7ggvd", "timestamp":"2026-04-17 23:46:41.200547882 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001a9600)} Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.247 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.335 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.335 [INFO][5117] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149' Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.358 [INFO][5117] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.423 [INFO][5117] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.485 [INFO][5117] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.495 [INFO][5117] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.517 [INFO][5117] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.517 [INFO][5117] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.532 [INFO][5117] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120 Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.547 [INFO][5117] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.572 [INFO][5117] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.197/26] block=192.168.97.192/26 
handle="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.573 [INFO][5117] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.197/26] handle="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" host="ip-172-31-16-149" Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.573 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:46:41.624867 containerd[2098]: 2026-04-17 23:46:41.573 [INFO][5117] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.197/26] IPv6=[] ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" HandleID="k8s-pod-network.a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.578 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f71864c-b114-4755-a146-e6f57d67291e", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"coredns-674b8bbfcf-7ggvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia77a608c46a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.578 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.197/32] ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.578 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia77a608c46a ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.584 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.584 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f71864c-b114-4755-a146-e6f57d67291e", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120", Pod:"coredns-674b8bbfcf-7ggvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia77a608c46a", MAC:"7e:c6:52:05:1d:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.627268 containerd[2098]: 2026-04-17 23:46:41.610 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120" Namespace="kube-system" Pod="coredns-674b8bbfcf-7ggvd" WorkloadEndpoint="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:46:41.775417 containerd[2098]: time="2026-04-17T23:46:41.775282802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:41.776083 containerd[2098]: time="2026-04-17T23:46:41.776039620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:41.776246 containerd[2098]: time="2026-04-17T23:46:41.776220311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.780356 containerd[2098]: time="2026-04-17T23:46:41.780261193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.816020 systemd-networkd[1658]: calid82e19c919c: Link UP Apr 17 23:46:41.824272 systemd-networkd[1658]: calid82e19c919c: Gained carrier Apr 17 23:46:41.880506 containerd[2098]: time="2026-04-17T23:46:41.877500985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:41.880506 containerd[2098]: time="2026-04-17T23:46:41.877585827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:41.880506 containerd[2098]: time="2026-04-17T23:46:41.877608647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.886928 containerd[2098]: time="2026-04-17T23:46:41.881557560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.907835 containerd[2098]: time="2026-04-17T23:46:41.906913141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:41.907835 containerd[2098]: time="2026-04-17T23:46:41.906985399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:41.907835 containerd[2098]: time="2026-04-17T23:46:41.907017236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.907835 containerd[2098]: time="2026-04-17T23:46:41.907129660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.913105 containerd[2098]: time="2026-04-17T23:46:41.912465306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:41.913105 containerd[2098]: time="2026-04-17T23:46:41.912542022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:41.913105 containerd[2098]: time="2026-04-17T23:46:41.912583208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.913105 containerd[2098]: time="2026-04-17T23:46:41.912723099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:40.497 [ERROR][5015] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:40.605 [INFO][5015] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0 calico-apiserver-64bcf5fd68- calico-system 3bfd7323-fa86-459b-911b-3e898630bb72 927 0 2026-04-17 23:46:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64bcf5fd68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-149 calico-apiserver-64bcf5fd68-fg4l8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid82e19c919c [] [] }} ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:40.605 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" 
Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.164 [INFO][5119] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" HandleID="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.298 [INFO][5119] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" HandleID="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036faf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"calico-apiserver-64bcf5fd68-fg4l8", "timestamp":"2026-04-17 23:46:41.164976672 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.298 [INFO][5119] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.601 [INFO][5119] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.601 [INFO][5119] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149' Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.626 [INFO][5119] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.642 [INFO][5119] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.655 [INFO][5119] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.659 [INFO][5119] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.671 [INFO][5119] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.671 [INFO][5119] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.681 [INFO][5119] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1 Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.707 [INFO][5119] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.727 [INFO][5119] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.198/26] block=192.168.97.192/26 
handle="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.730 [INFO][5119] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.198/26] handle="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" host="ip-172-31-16-149" Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.730 [INFO][5119] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:46:41.913658 containerd[2098]: 2026-04-17 23:46:41.731 [INFO][5119] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.198/26] IPv6=[] ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" HandleID="k8s-pod-network.bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.772 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"3bfd7323-fa86-459b-911b-3e898630bb72", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"calico-apiserver-64bcf5fd68-fg4l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid82e19c919c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.772 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.198/32] ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.772 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid82e19c919c ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.828 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.839 [INFO][5015] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"3bfd7323-fa86-459b-911b-3e898630bb72", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1", Pod:"calico-apiserver-64bcf5fd68-fg4l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid82e19c919c", MAC:"16:88:51:8d:dc:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:41.914654 containerd[2098]: 2026-04-17 23:46:41.877 [INFO][5015] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1" Namespace="calico-system" Pod="calico-apiserver-64bcf5fd68-fg4l8" WorkloadEndpoint="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:46:41.925104 systemd-networkd[1658]: cali22d979f851a: Link UP Apr 17 23:46:41.928164 systemd-networkd[1658]: cali22d979f851a: Gained carrier Apr 17 23:46:41.970673 systemd-networkd[1658]: caliba9c2ae5b73: Gained IPv6LL Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:40.201 [ERROR][4986] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:40.287 [INFO][4986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0 csi-node-driver- calico-system 15f67ed1-2981-42fd-8b37-94a71c9f9349 928 0 2026-04-17 23:46:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-149 csi-node-driver-ch9lf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22d979f851a [] [] }} ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:40.287 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" 
Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.269 [INFO][5075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" HandleID="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.328 [INFO][5075] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" HandleID="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103f70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"csi-node-driver-ch9lf", "timestamp":"2026-04-17 23:46:41.269701446 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000537340)} Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.329 [INFO][5075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.730 [INFO][5075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.730 [INFO][5075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149' Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.745 [INFO][5075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.770 [INFO][5075] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.791 [INFO][5075] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.806 [INFO][5075] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.825 [INFO][5075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.825 [INFO][5075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.829 [INFO][5075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.838 [INFO][5075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.855 [INFO][5075] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.199/26] block=192.168.97.192/26 
handle="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.855 [INFO][5075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.199/26] handle="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" host="ip-172-31-16-149" Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.862 [INFO][5075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:46:42.067282 containerd[2098]: 2026-04-17 23:46:41.862 [INFO][5075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.199/26] IPv6=[] ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" HandleID="k8s-pod-network.82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.915 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15f67ed1-2981-42fd-8b37-94a71c9f9349", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"csi-node-driver-ch9lf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d979f851a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.916 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.199/32] ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.918 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22d979f851a ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.935 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.936 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15f67ed1-2981-42fd-8b37-94a71c9f9349", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d", Pod:"csi-node-driver-ch9lf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d979f851a", MAC:"56:7e:8c:6e:17:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:42.073841 containerd[2098]: 2026-04-17 23:46:41.974 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d" Namespace="calico-system" Pod="csi-node-driver-ch9lf" WorkloadEndpoint="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:46:42.091334 containerd[2098]: time="2026-04-17T23:46:42.086090040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:42.091334 containerd[2098]: time="2026-04-17T23:46:42.090466085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:42.091334 containerd[2098]: time="2026-04-17T23:46:42.090888758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.091569 containerd[2098]: time="2026-04-17T23:46:42.091077189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.465334830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.465518247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.465658152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.466398354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.466453536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.466470534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.466800 containerd[2098]: time="2026-04-17T23:46:42.466569767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.468682 containerd[2098]: time="2026-04-17T23:46:42.466890267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.477937 containerd[2098]: time="2026-04-17T23:46:42.477883151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j45qt,Uid:32eba8f7-7133-4333-b018-bb3755f88966,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb\"" Apr 17 23:46:42.503705 containerd[2098]: time="2026-04-17T23:46:42.503504328Z" level=info msg="CreateContainer within sandbox \"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:46:42.516753 systemd-networkd[1658]: cali2a5ff2481d0: Link UP Apr 17 23:46:42.520496 systemd-networkd[1658]: cali2a5ff2481d0: Gained carrier Apr 17 23:46:42.583365 containerd[2098]: time="2026-04-17T23:46:42.583303826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7ggvd,Uid:4f71864c-b114-4755-a146-e6f57d67291e,Namespace:kube-system,Attempt:1,} returns sandbox id \"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120\"" Apr 17 23:46:42.592705 kernel: calico-node[5197]: 
memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:46:42.597532 containerd[2098]: time="2026-04-17T23:46:42.597437532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-9j2x7,Uid:2ac59079-9564-4c61-aa81-95cd5165fe7e,Namespace:calico-system,Attempt:1,} returns sandbox id \"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de\"" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:41.516 [ERROR][5163] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:41.609 [INFO][5163] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0 whisker-74888fcb5- calico-system c35b8840-ac49-4f48-82bf-2b42b6d47424 946 0 2026-04-17 23:46:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74888fcb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-149 whisker-74888fcb5-xlnn8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2a5ff2481d0 [] [] }} ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:41.609 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.207 [INFO][5259] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" HandleID="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Workload="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.302 [INFO][5259] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" HandleID="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Workload="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103c00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-149", "pod":"whisker-74888fcb5-xlnn8", "timestamp":"2026-04-17 23:46:42.207613749 +0000 UTC"}, Hostname:"ip-172-31-16-149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000d0580)} Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.302 [INFO][5259] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.302 [INFO][5259] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.302 [INFO][5259] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-149' Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.310 [INFO][5259] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.343 [INFO][5259] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.367 [INFO][5259] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.374 [INFO][5259] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.402 [INFO][5259] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.402 [INFO][5259] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.413 [INFO][5259] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.428 [INFO][5259] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.457 [INFO][5259] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.200/26] block=192.168.97.192/26 
handle="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.457 [INFO][5259] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.200/26] handle="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" host="ip-172-31-16-149" Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.457 [INFO][5259] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:46:42.601953 containerd[2098]: 2026-04-17 23:46:42.457 [INFO][5259] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.200/26] IPv6=[] ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" HandleID="k8s-pod-network.5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Workload="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.513 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0", GenerateName:"whisker-74888fcb5-", Namespace:"calico-system", SelfLink:"", UID:"c35b8840-ac49-4f48-82bf-2b42b6d47424", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74888fcb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"", Pod:"whisker-74888fcb5-xlnn8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2a5ff2481d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.513 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.200/32] ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.513 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a5ff2481d0 ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.517 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.545 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" 
Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0", GenerateName:"whisker-74888fcb5-", Namespace:"calico-system", SelfLink:"", UID:"c35b8840-ac49-4f48-82bf-2b42b6d47424", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74888fcb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea", Pod:"whisker-74888fcb5-xlnn8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2a5ff2481d0", MAC:"7e:a4:45:69:5c:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:46:42.605280 containerd[2098]: 2026-04-17 23:46:42.570 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea" Namespace="calico-system" Pod="whisker-74888fcb5-xlnn8" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--74888fcb5--xlnn8-eth0" Apr 17 23:46:42.611627 containerd[2098]: 
time="2026-04-17T23:46:42.611571757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76479b7b8b-5q868,Uid:0c34b680-c1de-441d-83fe-9024cfa08c4f,Namespace:calico-system,Attempt:1,} returns sandbox id \"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e\"" Apr 17 23:46:42.627317 containerd[2098]: time="2026-04-17T23:46:42.627277527Z" level=info msg="CreateContainer within sandbox \"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:46:42.627713 containerd[2098]: time="2026-04-17T23:46:42.627675170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:46:42.660249 systemd-networkd[1658]: calif6c91cfd500: Gained IPv6LL Apr 17 23:46:42.720241 containerd[2098]: time="2026-04-17T23:46:42.715648745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6h7b,Uid:28650620-bee7-44f0-92c4-6968b30d2305,Namespace:calico-system,Attempt:1,} returns sandbox id \"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda\"" Apr 17 23:46:42.716256 systemd-networkd[1658]: calib9711373a25: Gained IPv6LL Apr 17 23:46:42.716607 systemd-networkd[1658]: calia77a608c46a: Gained IPv6LL Apr 17 23:46:42.875655 containerd[2098]: time="2026-04-17T23:46:42.875591964Z" level=info msg="CreateContainer within sandbox \"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83c402bc09c2ce8278e469272e233fc4780c38b05e1a76cefbbbe8ce9a86beb2\"" Apr 17 23:46:42.880225 containerd[2098]: time="2026-04-17T23:46:42.880110632Z" level=info msg="StartContainer for \"83c402bc09c2ce8278e469272e233fc4780c38b05e1a76cefbbbe8ce9a86beb2\"" Apr 17 23:46:42.887076 containerd[2098]: time="2026-04-17T23:46:42.866443740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:42.887076 containerd[2098]: time="2026-04-17T23:46:42.869673765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:42.887076 containerd[2098]: time="2026-04-17T23:46:42.869696706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.887076 containerd[2098]: time="2026-04-17T23:46:42.869810788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:42.892064 containerd[2098]: time="2026-04-17T23:46:42.891198025Z" level=info msg="CreateContainer within sandbox \"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"882f103def8357cec5071cd95a8b7cc807f7e689969cafeed8f49ae3463607fb\"" Apr 17 23:46:42.923883 containerd[2098]: time="2026-04-17T23:46:42.923684883Z" level=info msg="StartContainer for \"882f103def8357cec5071cd95a8b7cc807f7e689969cafeed8f49ae3463607fb\"" Apr 17 23:46:42.975255 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:42.972329 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:42.972365 systemd-resolved[1997]: Flushed all caches. 
Apr 17 23:46:43.103198 systemd-networkd[1658]: cali175bed3475d: Gained IPv6LL Apr 17 23:46:43.184120 containerd[2098]: time="2026-04-17T23:46:43.183534350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch9lf,Uid:15f67ed1-2981-42fd-8b37-94a71c9f9349,Namespace:calico-system,Attempt:1,} returns sandbox id \"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d\"" Apr 17 23:46:43.280030 containerd[2098]: time="2026-04-17T23:46:43.279178309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64bcf5fd68-fg4l8,Uid:3bfd7323-fa86-459b-911b-3e898630bb72,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1\"" Apr 17 23:46:43.423067 systemd-networkd[1658]: calid82e19c919c: Gained IPv6LL Apr 17 23:46:43.543390 containerd[2098]: time="2026-04-17T23:46:43.542963815Z" level=info msg="StartContainer for \"83c402bc09c2ce8278e469272e233fc4780c38b05e1a76cefbbbe8ce9a86beb2\" returns successfully" Apr 17 23:46:43.600972 containerd[2098]: time="2026-04-17T23:46:43.600926746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74888fcb5-xlnn8,Uid:c35b8840-ac49-4f48-82bf-2b42b6d47424,Namespace:calico-system,Attempt:0,} returns sandbox id \"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea\"" Apr 17 23:46:43.605042 containerd[2098]: time="2026-04-17T23:46:43.604684234Z" level=info msg="StartContainer for \"882f103def8357cec5071cd95a8b7cc807f7e689969cafeed8f49ae3463607fb\" returns successfully" Apr 17 23:46:43.612196 systemd-networkd[1658]: cali22d979f851a: Gained IPv6LL Apr 17 23:46:44.404425 systemd-networkd[1658]: vxlan.calico: Link UP Apr 17 23:46:44.404434 systemd-networkd[1658]: vxlan.calico: Gained carrier Apr 17 23:46:44.428478 (udev-worker)[5763]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:46:44.573260 systemd-networkd[1658]: cali2a5ff2481d0: Gained IPv6LL Apr 17 23:46:44.755085 kubelet[3359]: I0417 23:46:44.751676 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7ggvd" podStartSLOduration=40.743497726 podStartE2EDuration="40.743497726s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:44.727868823 +0000 UTC m=+44.365056502" watchObservedRunningTime="2026-04-17 23:46:44.743497726 +0000 UTC m=+44.380685406" Apr 17 23:46:44.758631 kubelet[3359]: I0417 23:46:44.758194 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j45qt" podStartSLOduration=40.7581762 podStartE2EDuration="40.7581762s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:44.757881607 +0000 UTC m=+44.395069280" watchObservedRunningTime="2026-04-17 23:46:44.7581762 +0000 UTC m=+44.395363871" Apr 17 23:46:45.027611 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:45.020101 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:45.020126 systemd-resolved[1997]: Flushed all caches. 
Apr 17 23:46:46.046340 systemd-networkd[1658]: vxlan.calico: Gained IPv6LL Apr 17 23:46:47.439077 containerd[2098]: time="2026-04-17T23:46:47.438779731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:46:47.454321 containerd[2098]: time="2026-04-17T23:46:47.453802344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.823247486s" Apr 17 23:46:47.454321 containerd[2098]: time="2026-04-17T23:46:47.453855520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:46:47.460963 containerd[2098]: time="2026-04-17T23:46:47.459233290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:46:47.460963 containerd[2098]: time="2026-04-17T23:46:47.460613538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:47.463650 containerd[2098]: time="2026-04-17T23:46:47.463522700Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:47.464770 containerd[2098]: time="2026-04-17T23:46:47.464739493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:47.473510 containerd[2098]: time="2026-04-17T23:46:47.473466766Z" level=info 
msg="CreateContainer within sandbox \"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:46:47.511404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349038561.mount: Deactivated successfully. Apr 17 23:46:47.533664 containerd[2098]: time="2026-04-17T23:46:47.533610496Z" level=info msg="CreateContainer within sandbox \"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8f32d16fb9ccbc54150bd71f675daf38af7cf91a17998643909a3ec48188267\"" Apr 17 23:46:47.534372 containerd[2098]: time="2026-04-17T23:46:47.534258127Z" level=info msg="StartContainer for \"c8f32d16fb9ccbc54150bd71f675daf38af7cf91a17998643909a3ec48188267\"" Apr 17 23:46:47.630028 containerd[2098]: time="2026-04-17T23:46:47.629966084Z" level=info msg="StartContainer for \"c8f32d16fb9ccbc54150bd71f675daf38af7cf91a17998643909a3ec48188267\" returns successfully" Apr 17 23:46:48.684423 kubelet[3359]: I0417 23:46:48.684352 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64bcf5fd68-9j2x7" podStartSLOduration=26.83646669 podStartE2EDuration="31.684330164s" podCreationTimestamp="2026-04-17 23:46:17 +0000 UTC" firstStartedPulling="2026-04-17 23:46:42.611088899 +0000 UTC m=+42.248276570" lastFinishedPulling="2026-04-17 23:46:47.458952367 +0000 UTC m=+47.096140044" observedRunningTime="2026-04-17 23:46:48.683910043 +0000 UTC m=+48.321097722" watchObservedRunningTime="2026-04-17 23:46:48.684330164 +0000 UTC m=+48.321517842" Apr 17 23:46:48.819709 ntpd[2052]: Listen normally on 6 vxlan.calico 192.168.97.192:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 6 vxlan.calico 192.168.97.192:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 7 caliba9c2ae5b73 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 
17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 8 cali175bed3475d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 9 calif6c91cfd500 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 10 calib9711373a25 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 11 calia77a608c46a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 12 calid82e19c919c [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 13 cali22d979f851a [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 14 cali2a5ff2481d0 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:46:48.828297 ntpd[2052]: 17 Apr 23:46:48 ntpd[2052]: Listen normally on 15 vxlan.calico [fe80::6473:edff:fe67:32cf%12]:123 Apr 17 23:46:48.819842 ntpd[2052]: Listen normally on 7 caliba9c2ae5b73 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:46:48.819903 ntpd[2052]: Listen normally on 8 cali175bed3475d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:46:48.819946 ntpd[2052]: Listen normally on 9 calif6c91cfd500 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:46:48.819989 ntpd[2052]: Listen normally on 10 calib9711373a25 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 17 23:46:48.820054 ntpd[2052]: Listen normally on 11 calia77a608c46a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:46:48.820094 ntpd[2052]: Listen normally on 12 calid82e19c919c [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:46:48.820134 ntpd[2052]: Listen normally on 13 cali22d979f851a [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:46:48.820173 ntpd[2052]: Listen normally on 14 cali2a5ff2481d0 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:46:48.820213 ntpd[2052]: Listen normally on 15 vxlan.calico 
[fe80::6473:edff:fe67:32cf%12]:123 Apr 17 23:46:49.628638 kubelet[3359]: I0417 23:46:49.628598 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:46:50.776454 containerd[2098]: time="2026-04-17T23:46:50.776395003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:50.778431 containerd[2098]: time="2026-04-17T23:46:50.778325864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:46:50.782475 containerd[2098]: time="2026-04-17T23:46:50.781062070Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:50.788053 containerd[2098]: time="2026-04-17T23:46:50.787983841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:50.789068 containerd[2098]: time="2026-04-17T23:46:50.789031793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.329731869s" Apr 17 23:46:50.789217 containerd[2098]: time="2026-04-17T23:46:50.789197291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:46:50.790644 containerd[2098]: time="2026-04-17T23:46:50.790622136Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:46:50.910338 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:50.908091 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:50.908131 systemd-resolved[1997]: Flushed all caches. Apr 17 23:46:50.975889 containerd[2098]: time="2026-04-17T23:46:50.975851135Z" level=info msg="CreateContainer within sandbox \"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:46:51.003202 containerd[2098]: time="2026-04-17T23:46:51.003153981Z" level=info msg="CreateContainer within sandbox \"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bf66e22b392e789002858b1d075a405d3067dfe455e29be97c6f1b72403fa442\"" Apr 17 23:46:51.003942 containerd[2098]: time="2026-04-17T23:46:51.003909210Z" level=info msg="StartContainer for \"bf66e22b392e789002858b1d075a405d3067dfe455e29be97c6f1b72403fa442\"" Apr 17 23:46:51.198617 containerd[2098]: time="2026-04-17T23:46:51.198389875Z" level=info msg="StartContainer for \"bf66e22b392e789002858b1d075a405d3067dfe455e29be97c6f1b72403fa442\" returns successfully" Apr 17 23:46:51.858406 kubelet[3359]: I0417 23:46:51.858332 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76479b7b8b-5q868" podStartSLOduration=24.697524945 podStartE2EDuration="32.858301403s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:46:42.629512793 +0000 UTC m=+42.266700473" lastFinishedPulling="2026-04-17 23:46:50.790289255 +0000 UTC m=+50.427476931" observedRunningTime="2026-04-17 23:46:51.711086229 +0000 UTC m=+51.348273910" watchObservedRunningTime="2026-04-17 23:46:51.858301403 +0000 UTC m=+51.495489084" Apr 17 23:46:52.956084 systemd-resolved[1997]: Under memory pressure, 
flushing caches. Apr 17 23:46:52.958396 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:52.956131 systemd-resolved[1997]: Flushed all caches. Apr 17 23:46:53.605586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559249265.mount: Deactivated successfully. Apr 17 23:46:54.239628 containerd[2098]: time="2026-04-17T23:46:54.239561809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:54.241548 containerd[2098]: time="2026-04-17T23:46:54.241430490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:46:54.244261 containerd[2098]: time="2026-04-17T23:46:54.243735722Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:54.247887 containerd[2098]: time="2026-04-17T23:46:54.247823102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:54.249268 containerd[2098]: time="2026-04-17T23:46:54.249232724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.45826783s" Apr 17 23:46:54.249268 containerd[2098]: time="2026-04-17T23:46:54.249271858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:46:54.251175 containerd[2098]: 
time="2026-04-17T23:46:54.250724111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:46:54.285912 containerd[2098]: time="2026-04-17T23:46:54.285822359Z" level=info msg="CreateContainer within sandbox \"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:46:54.315832 containerd[2098]: time="2026-04-17T23:46:54.315780281Z" level=info msg="CreateContainer within sandbox \"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"96ecae4a6b7d62dad8f3ac6e460108a3f1dc9104b908fbf4b10b490fad9c2f04\"" Apr 17 23:46:54.328638 containerd[2098]: time="2026-04-17T23:46:54.326469083Z" level=info msg="StartContainer for \"96ecae4a6b7d62dad8f3ac6e460108a3f1dc9104b908fbf4b10b490fad9c2f04\"" Apr 17 23:46:54.580888 containerd[2098]: time="2026-04-17T23:46:54.580359258Z" level=info msg="StartContainer for \"96ecae4a6b7d62dad8f3ac6e460108a3f1dc9104b908fbf4b10b490fad9c2f04\" returns successfully" Apr 17 23:46:55.004268 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:55.006257 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:55.004292 systemd-resolved[1997]: Flushed all caches. Apr 17 23:46:55.744799 systemd[1]: run-containerd-runc-k8s.io-96ecae4a6b7d62dad8f3ac6e460108a3f1dc9104b908fbf4b10b490fad9c2f04-runc.IdVLEw.mount: Deactivated successfully. Apr 17 23:46:56.037410 systemd[1]: Started sshd@7-172.31.16.149:22-20.229.252.112:47474.service - OpenSSH per-connection server daemon (20.229.252.112:47474). 
Apr 17 23:46:56.162269 containerd[2098]: time="2026-04-17T23:46:56.162203461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:56.167098 containerd[2098]: time="2026-04-17T23:46:56.166992937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:46:56.205341 containerd[2098]: time="2026-04-17T23:46:56.205272214Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:56.219404 containerd[2098]: time="2026-04-17T23:46:56.219324430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:56.220531 containerd[2098]: time="2026-04-17T23:46:56.220484294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.969729103s" Apr 17 23:46:56.220923 containerd[2098]: time="2026-04-17T23:46:56.220892297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:46:56.222293 containerd[2098]: time="2026-04-17T23:46:56.222240244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:46:56.240385 containerd[2098]: time="2026-04-17T23:46:56.240330453Z" level=info msg="CreateContainer within sandbox \"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:46:56.309602 containerd[2098]: time="2026-04-17T23:46:56.309487518Z" level=info msg="CreateContainer within sandbox \"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"754a9546a113d95b8cdbe7a7719cb3bd030b4a7f5f4210c21453ae2bc0426352\"" Apr 17 23:46:56.310621 containerd[2098]: time="2026-04-17T23:46:56.310518635Z" level=info msg="StartContainer for \"754a9546a113d95b8cdbe7a7719cb3bd030b4a7f5f4210c21453ae2bc0426352\"" Apr 17 23:46:56.418599 containerd[2098]: time="2026-04-17T23:46:56.418555294Z" level=info msg="StartContainer for \"754a9546a113d95b8cdbe7a7719cb3bd030b4a7f5f4210c21453ae2bc0426352\" returns successfully" Apr 17 23:46:56.551278 containerd[2098]: time="2026-04-17T23:46:56.551233400Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:56.553301 containerd[2098]: time="2026-04-17T23:46:56.553156982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:46:56.564265 containerd[2098]: time="2026-04-17T23:46:56.563890383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 341.577123ms" Apr 17 23:46:56.564265 containerd[2098]: time="2026-04-17T23:46:56.563932221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:46:56.565311 containerd[2098]: time="2026-04-17T23:46:56.565094000Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:46:56.573030 containerd[2098]: time="2026-04-17T23:46:56.572977402Z" level=info msg="CreateContainer within sandbox \"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:46:56.603652 containerd[2098]: time="2026-04-17T23:46:56.603608677Z" level=info msg="CreateContainer within sandbox \"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9ff96c37e0d217c4217b74dd4e6219d79a54c81aab80da6884f65972b18ad319\"" Apr 17 23:46:56.606251 containerd[2098]: time="2026-04-17T23:46:56.605384498Z" level=info msg="StartContainer for \"9ff96c37e0d217c4217b74dd4e6219d79a54c81aab80da6884f65972b18ad319\"" Apr 17 23:46:56.716988 containerd[2098]: time="2026-04-17T23:46:56.716054704Z" level=info msg="StartContainer for \"9ff96c37e0d217c4217b74dd4e6219d79a54c81aab80da6884f65972b18ad319\" returns successfully" Apr 17 23:46:57.055188 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:57.052247 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:57.052284 systemd-resolved[1997]: Flushed all caches. Apr 17 23:46:57.151844 sshd[6096]: Accepted publickey for core from 20.229.252.112 port 47474 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:46:57.156979 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:46:57.199401 systemd-logind[2068]: New session 8 of user core. Apr 17 23:46:57.208347 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 23:46:58.021044 kubelet[3359]: I0417 23:46:57.914797 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-c6h7b" podStartSLOduration=29.380831547 podStartE2EDuration="40.858053878s" podCreationTimestamp="2026-04-17 23:46:17 +0000 UTC" firstStartedPulling="2026-04-17 23:46:42.773202141 +0000 UTC m=+42.410389812" lastFinishedPulling="2026-04-17 23:46:54.250424485 +0000 UTC m=+53.887612143" observedRunningTime="2026-04-17 23:46:54.715158568 +0000 UTC m=+54.352346247" watchObservedRunningTime="2026-04-17 23:46:57.858053878 +0000 UTC m=+57.495241552" Apr 17 23:46:58.021044 kubelet[3359]: I0417 23:46:58.019730 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64bcf5fd68-fg4l8" podStartSLOduration=27.985332683 podStartE2EDuration="41.019706055s" podCreationTimestamp="2026-04-17 23:46:17 +0000 UTC" firstStartedPulling="2026-04-17 23:46:43.530476872 +0000 UTC m=+43.167664540" lastFinishedPulling="2026-04-17 23:46:56.564850232 +0000 UTC m=+56.202037912" observedRunningTime="2026-04-17 23:46:57.800411604 +0000 UTC m=+57.437599283" watchObservedRunningTime="2026-04-17 23:46:58.019706055 +0000 UTC m=+57.656893734" Apr 17 23:46:58.552390 containerd[2098]: time="2026-04-17T23:46:58.552340492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:58.558728 containerd[2098]: time="2026-04-17T23:46:58.558671246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:46:58.560842 containerd[2098]: time="2026-04-17T23:46:58.560801069Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:58.576126 containerd[2098]: time="2026-04-17T23:46:58.576059587Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:58.579268 containerd[2098]: time="2026-04-17T23:46:58.577898625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.012751757s" Apr 17 23:46:58.579268 containerd[2098]: time="2026-04-17T23:46:58.577947830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:46:58.637064 containerd[2098]: time="2026-04-17T23:46:58.636996120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:46:58.987070 containerd[2098]: time="2026-04-17T23:46:58.986228153Z" level=info msg="CreateContainer within sandbox \"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:46:59.069632 containerd[2098]: time="2026-04-17T23:46:59.069406636Z" level=info msg="CreateContainer within sandbox \"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"144a42a52a7faf5c65f9d3257fd04f828356480c4e242ada661576b47141f4db\"" Apr 17 23:46:59.105792 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:46:59.100386 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:46:59.100423 systemd-resolved[1997]: Flushed all caches. 
Apr 17 23:46:59.134026 containerd[2098]: time="2026-04-17T23:46:59.132966803Z" level=info msg="StartContainer for \"144a42a52a7faf5c65f9d3257fd04f828356480c4e242ada661576b47141f4db\"" Apr 17 23:46:59.184947 sshd[6096]: pam_unix(sshd:session): session closed for user core Apr 17 23:46:59.208722 systemd[1]: sshd@7-172.31.16.149:22-20.229.252.112:47474.service: Deactivated successfully. Apr 17 23:46:59.239383 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:46:59.251961 systemd-logind[2068]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:46:59.302076 systemd-logind[2068]: Removed session 8. Apr 17 23:46:59.558169 systemd[1]: run-containerd-runc-k8s.io-144a42a52a7faf5c65f9d3257fd04f828356480c4e242ada661576b47141f4db-runc.CO7Jmu.mount: Deactivated successfully. Apr 17 23:46:59.646858 containerd[2098]: time="2026-04-17T23:46:59.646319949Z" level=info msg="StartContainer for \"144a42a52a7faf5c65f9d3257fd04f828356480c4e242ada661576b47141f4db\" returns successfully" Apr 17 23:47:00.863463 containerd[2098]: time="2026-04-17T23:47:00.862455808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:00.867317 containerd[2098]: time="2026-04-17T23:47:00.867254910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:47:00.874389 containerd[2098]: time="2026-04-17T23:47:00.873058975Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:00.879473 containerd[2098]: time="2026-04-17T23:47:00.879434564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Apr 17 23:47:00.887070 containerd[2098]: time="2026-04-17T23:47:00.886158300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.248845478s" Apr 17 23:47:00.887070 containerd[2098]: time="2026-04-17T23:47:00.886218360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:47:01.095919 containerd[2098]: time="2026-04-17T23:47:01.095330291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:47:01.117553 containerd[2098]: time="2026-04-17T23:47:01.117106479Z" level=info msg="StopPodSandbox for \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\"" Apr 17 23:47:01.147030 containerd[2098]: time="2026-04-17T23:47:01.146302666Z" level=info msg="CreateContainer within sandbox \"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:47:01.156527 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:47:01.148076 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:47:01.148114 systemd-resolved[1997]: Flushed all caches. 
Apr 17 23:47:01.253171 containerd[2098]: time="2026-04-17T23:47:01.253110874Z" level=info msg="CreateContainer within sandbox \"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cccfa0c6a0c43f78c73334a52441438cf9212fb067679ef151fe904cee9e4d98\"" Apr 17 23:47:01.255182 containerd[2098]: time="2026-04-17T23:47:01.255060219Z" level=info msg="StartContainer for \"cccfa0c6a0c43f78c73334a52441438cf9212fb067679ef151fe904cee9e4d98\"" Apr 17 23:47:02.838461 containerd[2098]: time="2026-04-17T23:47:02.837831619Z" level=info msg="StartContainer for \"cccfa0c6a0c43f78c73334a52441438cf9212fb067679ef151fe904cee9e4d98\" returns successfully" Apr 17 23:47:03.203074 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:47:03.196155 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:47:03.198125 systemd-resolved[1997]: Flushed all caches. Apr 17 23:47:03.435506 kubelet[3359]: I0417 23:47:03.432033 3359 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:47:03.444465 kubelet[3359]: I0417 23:47:03.443221 3359 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.264 [WARNING][6279] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"32eba8f7-7133-4333-b018-bb3755f88966", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb", Pod:"coredns-674b8bbfcf-j45qt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba9c2ae5b73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.272 
[INFO][6279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.272 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" iface="eth0" netns="" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.272 [INFO][6279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.272 [INFO][6279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.755 [INFO][6309] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.759 [INFO][6309] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.760 [INFO][6309] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.776 [WARNING][6309] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.776 [INFO][6309] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.778 [INFO][6309] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:03.782925 containerd[2098]: 2026-04-17 23:47:03.780 [INFO][6279] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:03.791459 containerd[2098]: time="2026-04-17T23:47:03.782987140Z" level=info msg="TearDown network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" successfully" Apr 17 23:47:03.791459 containerd[2098]: time="2026-04-17T23:47:03.783039562Z" level=info msg="StopPodSandbox for \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" returns successfully" Apr 17 23:47:03.878157 containerd[2098]: time="2026-04-17T23:47:03.878104449Z" level=info msg="RemovePodSandbox for \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\"" Apr 17 23:47:03.889499 containerd[2098]: time="2026-04-17T23:47:03.889445091Z" level=info msg="Forcibly stopping sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\"" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:03.956 [WARNING][6325] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"32eba8f7-7133-4333-b018-bb3755f88966", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"ff06e06e58c132f8475a3fc1efd8ba50032181d1306573050d3e27066f4323bb", Pod:"coredns-674b8bbfcf-j45qt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba9c2ae5b73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:03.958 
[INFO][6325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:03.958 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" iface="eth0" netns="" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:03.958 [INFO][6325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:03.958 [INFO][6325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.063 [INFO][6333] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.069 [INFO][6333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.075 [INFO][6333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.124 [WARNING][6333] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.124 [INFO][6333] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" HandleID="k8s-pod-network.f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--j45qt-eth0" Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.131 [INFO][6333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:04.161453 containerd[2098]: 2026-04-17 23:47:04.142 [INFO][6325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269" Apr 17 23:47:04.161453 containerd[2098]: time="2026-04-17T23:47:04.161025941Z" level=info msg="TearDown network for sandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" successfully" Apr 17 23:47:04.220459 kubelet[3359]: I0417 23:47:04.220377 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ch9lf" podStartSLOduration=28.491332545 podStartE2EDuration="46.159637841s" podCreationTimestamp="2026-04-17 23:46:18 +0000 UTC" firstStartedPulling="2026-04-17 23:46:43.361071187 +0000 UTC m=+42.998258858" lastFinishedPulling="2026-04-17 23:47:01.029376495 +0000 UTC m=+60.666564154" observedRunningTime="2026-04-17 23:47:04.129339585 +0000 UTC m=+63.766527264" watchObservedRunningTime="2026-04-17 23:47:04.159637841 +0000 UTC m=+63.796825520" Apr 17 23:47:04.230956 containerd[2098]: time="2026-04-17T23:47:04.230898327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:47:04.276371 containerd[2098]: time="2026-04-17T23:47:04.276315439Z" level=info msg="RemovePodSandbox \"f834e5cb29ffa1b835578f9ac6c1b59ee293deea265980f9b86d122a5137c269\" returns successfully" Apr 17 23:47:04.296858 containerd[2098]: time="2026-04-17T23:47:04.296810007Z" level=info msg="StopPodSandbox for \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\"" Apr 17 23:47:04.387414 systemd[1]: Started sshd@8-172.31.16.149:22-20.229.252.112:47476.service - OpenSSH per-connection server daemon (20.229.252.112:47476). Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.570 [WARNING][6349] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0", GenerateName:"calico-kube-controllers-76479b7b8b-", Namespace:"calico-system", SelfLink:"", UID:"0c34b680-c1de-441d-83fe-9024cfa08c4f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76479b7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e", Pod:"calico-kube-controllers-76479b7b8b-5q868", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6c91cfd500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.573 [INFO][6349] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.576 [INFO][6349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" iface="eth0" netns="" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.576 [INFO][6349] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.576 [INFO][6349] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.709 [INFO][6365] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.713 [INFO][6365] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.713 [INFO][6365] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.729 [WARNING][6365] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.731 [INFO][6365] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.733 [INFO][6365] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:04.745404 containerd[2098]: 2026-04-17 23:47:04.741 [INFO][6349] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.745404 containerd[2098]: time="2026-04-17T23:47:04.745390985Z" level=info msg="TearDown network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" successfully" Apr 17 23:47:04.751762 containerd[2098]: time="2026-04-17T23:47:04.745429470Z" level=info msg="StopPodSandbox for \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" returns successfully" Apr 17 23:47:04.751762 containerd[2098]: time="2026-04-17T23:47:04.746088644Z" level=info msg="RemovePodSandbox for \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\"" Apr 17 23:47:04.751762 containerd[2098]: time="2026-04-17T23:47:04.746123900Z" level=info msg="Forcibly stopping sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\"" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.812 [WARNING][6379] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0", GenerateName:"calico-kube-controllers-76479b7b8b-", Namespace:"calico-system", SelfLink:"", UID:"0c34b680-c1de-441d-83fe-9024cfa08c4f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76479b7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"d43c23da2ade1cd9f9052a5786c9d47610391ee308ff3f11a50b5da86363892e", Pod:"calico-kube-controllers-76479b7b8b-5q868", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6c91cfd500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.813 [INFO][6379] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.813 [INFO][6379] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" iface="eth0" netns="" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.813 [INFO][6379] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.813 [INFO][6379] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.871 [INFO][6386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.871 [INFO][6386] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.871 [INFO][6386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.884 [WARNING][6386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.884 [INFO][6386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" HandleID="k8s-pod-network.54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Workload="ip--172--31--16--149-k8s-calico--kube--controllers--76479b7b8b--5q868-eth0" Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.886 [INFO][6386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:04.892760 containerd[2098]: 2026-04-17 23:47:04.889 [INFO][6379] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81" Apr 17 23:47:04.895215 containerd[2098]: time="2026-04-17T23:47:04.892799699Z" level=info msg="TearDown network for sandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" successfully" Apr 17 23:47:04.945088 containerd[2098]: time="2026-04-17T23:47:04.945042828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:04.945219 containerd[2098]: time="2026-04-17T23:47:04.945136497Z" level=info msg="RemovePodSandbox \"54e578b2fbee5227f2ba6a07c8442de667588dd2e8b38a66580e5e2d0543cf81\" returns successfully" Apr 17 23:47:04.946214 containerd[2098]: time="2026-04-17T23:47:04.946138565Z" level=info msg="StopPodSandbox for \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\"" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.039 [WARNING][6400] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f71864c-b114-4755-a146-e6f57d67291e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120", Pod:"coredns-674b8bbfcf-7ggvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia77a608c46a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.040 [INFO][6400] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.040 [INFO][6400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" iface="eth0" netns="" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.040 [INFO][6400] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.040 [INFO][6400] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.092 [INFO][6407] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.092 [INFO][6407] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.093 [INFO][6407] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.105 [WARNING][6407] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.105 [INFO][6407] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.107 [INFO][6407] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:05.122392 containerd[2098]: 2026-04-17 23:47:05.110 [INFO][6400] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.123508 containerd[2098]: time="2026-04-17T23:47:05.123224383Z" level=info msg="TearDown network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" successfully" Apr 17 23:47:05.123508 containerd[2098]: time="2026-04-17T23:47:05.123262537Z" level=info msg="StopPodSandbox for \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" returns successfully" Apr 17 23:47:05.124165 containerd[2098]: time="2026-04-17T23:47:05.124076782Z" level=info msg="RemovePodSandbox for \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\"" Apr 17 23:47:05.124572 containerd[2098]: time="2026-04-17T23:47:05.124278052Z" level=info msg="Forcibly stopping sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\"" Apr 17 23:47:05.224653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434862866.mount: Deactivated successfully. Apr 17 23:47:05.244247 systemd-resolved[1997]: Under memory pressure, flushing caches. Apr 17 23:47:05.248496 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:47:05.244288 systemd-resolved[1997]: Flushed all caches. Apr 17 23:47:05.288169 containerd[2098]: time="2026-04-17T23:47:05.287763387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.205 [WARNING][6421] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f71864c-b114-4755-a146-e6f57d67291e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"a82ce7cd8a50281754fbd67396f31e013d2df32655317d4cf042d38829825120", Pod:"coredns-674b8bbfcf-7ggvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia77a608c46a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.208 
[INFO][6421] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.208 [INFO][6421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" iface="eth0" netns="" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.208 [INFO][6421] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.208 [INFO][6421] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.272 [INFO][6428] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.273 [INFO][6428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.273 [INFO][6428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.285 [WARNING][6428] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.285 [INFO][6428] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" HandleID="k8s-pod-network.220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Workload="ip--172--31--16--149-k8s-coredns--674b8bbfcf--7ggvd-eth0" Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.287 [INFO][6428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:05.322147 containerd[2098]: 2026-04-17 23:47:05.292 [INFO][6421] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023" Apr 17 23:47:05.322147 containerd[2098]: time="2026-04-17T23:47:05.320595158Z" level=info msg="TearDown network for sandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" successfully" Apr 17 23:47:05.442856 containerd[2098]: time="2026-04-17T23:47:05.357190850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:47:05.459240 containerd[2098]: time="2026-04-17T23:47:05.435598139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:05.459240 containerd[2098]: time="2026-04-17T23:47:05.459071347Z" level=info msg="RemovePodSandbox \"220205a23411d17f2521b7e4057e3d7aa8b6e4d5ae7de3a39e56a4f289af9023\" returns successfully" Apr 17 23:47:05.459240 containerd[2098]: time="2026-04-17T23:47:05.436105971Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:05.466450 containerd[2098]: time="2026-04-17T23:47:05.466393796Z" level=info msg="StopPodSandbox for \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\"" Apr 17 23:47:05.488373 containerd[2098]: time="2026-04-17T23:47:05.488326541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:05.490407 containerd[2098]: time="2026-04-17T23:47:05.490223116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 4.394829527s" Apr 17 23:47:05.492099 containerd[2098]: time="2026-04-17T23:47:05.491941799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:47:05.575687 sshd[6361]: Accepted publickey for core from 20.229.252.112 port 47476 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:47:05.589649 containerd[2098]: time="2026-04-17T23:47:05.589540390Z" level=info msg="CreateContainer within sandbox 
\"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:47:05.590520 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:47:05.645314 systemd-logind[2068]: New session 9 of user core. Apr 17 23:47:05.653173 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.581 [WARNING][6459] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.581 [INFO][6459] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.581 [INFO][6459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" iface="eth0" netns="" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.581 [INFO][6459] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.581 [INFO][6459] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.659 [INFO][6467] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.660 [INFO][6467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.660 [INFO][6467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.673 [WARNING][6467] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.674 [INFO][6467] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.678 [INFO][6467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:05.690815 containerd[2098]: 2026-04-17 23:47:05.683 [INFO][6459] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.690815 containerd[2098]: time="2026-04-17T23:47:05.689996098Z" level=info msg="TearDown network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" successfully" Apr 17 23:47:05.690815 containerd[2098]: time="2026-04-17T23:47:05.690046938Z" level=info msg="StopPodSandbox for \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" returns successfully" Apr 17 23:47:05.690815 containerd[2098]: time="2026-04-17T23:47:05.690715877Z" level=info msg="RemovePodSandbox for \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\"" Apr 17 23:47:05.690815 containerd[2098]: time="2026-04-17T23:47:05.690768524Z" level=info msg="Forcibly stopping sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\"" Apr 17 23:47:05.779671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567369559.mount: Deactivated successfully. 
Apr 17 23:47:05.813515 containerd[2098]: time="2026-04-17T23:47:05.813453515Z" level=info msg="CreateContainer within sandbox \"5403332fc29354706f0f6a07e6a84fca603adfe77988b44d9b3e28b9a9ff2eea\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e83dafd228c18611af6aaffa2c4403e027c09b2cc1af9c95fabe3fde86ec4ad4\"" Apr 17 23:47:05.823880 containerd[2098]: time="2026-04-17T23:47:05.823604081Z" level=info msg="StartContainer for \"e83dafd228c18611af6aaffa2c4403e027c09b2cc1af9c95fabe3fde86ec4ad4\"" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.763 [WARNING][6484] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" WorkloadEndpoint="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.764 [INFO][6484] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.764 [INFO][6484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" iface="eth0" netns="" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.768 [INFO][6484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.770 [INFO][6484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.852 [INFO][6491] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.852 [INFO][6491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.852 [INFO][6491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.867 [WARNING][6491] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.867 [INFO][6491] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" HandleID="k8s-pod-network.9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Workload="ip--172--31--16--149-k8s-whisker--c965584d8--dhqvj-eth0" Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.870 [INFO][6491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:05.889316 containerd[2098]: 2026-04-17 23:47:05.883 [INFO][6484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0" Apr 17 23:47:05.890458 containerd[2098]: time="2026-04-17T23:47:05.890425525Z" level=info msg="TearDown network for sandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" successfully" Apr 17 23:47:05.904913 containerd[2098]: time="2026-04-17T23:47:05.904858344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:05.905366 containerd[2098]: time="2026-04-17T23:47:05.904961858Z" level=info msg="RemovePodSandbox \"9ff60b53c5905f0aa8915369b74709eff2f23dd2b27be816f61198945a6a0ab0\" returns successfully" Apr 17 23:47:05.905971 containerd[2098]: time="2026-04-17T23:47:05.905945791Z" level=info msg="StopPodSandbox for \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\"" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:05.986 [WARNING][6509] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"3bfd7323-fa86-459b-911b-3e898630bb72", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1", Pod:"calico-apiserver-64bcf5fd68-fg4l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid82e19c919c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:05.987 [INFO][6509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:05.988 [INFO][6509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" iface="eth0" netns="" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:05.988 [INFO][6509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:05.988 [INFO][6509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.049 [INFO][6519] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.050 [INFO][6519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.050 [INFO][6519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.064 [WARNING][6519] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.064 [INFO][6519] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.066 [INFO][6519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:06.077844 containerd[2098]: 2026-04-17 23:47:06.073 [INFO][6509] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.077844 containerd[2098]: time="2026-04-17T23:47:06.076736425Z" level=info msg="TearDown network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" successfully" Apr 17 23:47:06.077844 containerd[2098]: time="2026-04-17T23:47:06.076783235Z" level=info msg="StopPodSandbox for \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" returns successfully" Apr 17 23:47:06.077844 containerd[2098]: time="2026-04-17T23:47:06.077722067Z" level=info msg="RemovePodSandbox for \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\"" Apr 17 23:47:06.077844 containerd[2098]: time="2026-04-17T23:47:06.077758703Z" level=info msg="Forcibly stopping sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\"" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.159 [WARNING][6534] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"3bfd7323-fa86-459b-911b-3e898630bb72", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"bd465aeb80fd9972f3558a13e5d2fb677b4ae57ba3be9bcf784b4d53a685adb1", Pod:"calico-apiserver-64bcf5fd68-fg4l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid82e19c919c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.160 [INFO][6534] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.160 [INFO][6534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" iface="eth0" netns="" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.160 [INFO][6534] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.160 [INFO][6534] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.205 [INFO][6546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.205 [INFO][6546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.205 [INFO][6546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.217 [WARNING][6546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.217 [INFO][6546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" HandleID="k8s-pod-network.6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--fg4l8-eth0" Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.220 [INFO][6546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:06.235789 containerd[2098]: 2026-04-17 23:47:06.225 [INFO][6534] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036" Apr 17 23:47:06.235789 containerd[2098]: time="2026-04-17T23:47:06.234460679Z" level=info msg="TearDown network for sandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" successfully" Apr 17 23:47:06.275409 containerd[2098]: time="2026-04-17T23:47:06.274089919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:06.275692 containerd[2098]: time="2026-04-17T23:47:06.275662597Z" level=info msg="RemovePodSandbox \"6f3bc17098059f89334045cafbcc1ba2720ebb34dc3d2ec9fa016087c9dd3036\" returns successfully" Apr 17 23:47:06.277569 containerd[2098]: time="2026-04-17T23:47:06.276669539Z" level=info msg="StopPodSandbox for \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\"" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.350 [WARNING][6563] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28650620-bee7-44f0-92c4-6968b30d2305", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda", Pod:"goldmane-5b85766d88-c6h7b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calib9711373a25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.351 [INFO][6563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.351 [INFO][6563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" iface="eth0" netns="" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.351 [INFO][6563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.351 [INFO][6563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.417 [INFO][6572] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.417 [INFO][6572] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.417 [INFO][6572] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.431 [WARNING][6572] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.431 [INFO][6572] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.433 [INFO][6572] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:06.446737 containerd[2098]: 2026-04-17 23:47:06.440 [INFO][6563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.447702 containerd[2098]: time="2026-04-17T23:47:06.447667866Z" level=info msg="TearDown network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" successfully" Apr 17 23:47:06.447803 containerd[2098]: time="2026-04-17T23:47:06.447787596Z" level=info msg="StopPodSandbox for \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" returns successfully" Apr 17 23:47:06.448535 containerd[2098]: time="2026-04-17T23:47:06.448350594Z" level=info msg="RemovePodSandbox for \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\"" Apr 17 23:47:06.448535 containerd[2098]: time="2026-04-17T23:47:06.448385506Z" level=info msg="Forcibly stopping sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\"" Apr 17 23:47:06.550312 systemd[1]: run-containerd-runc-k8s.io-e83dafd228c18611af6aaffa2c4403e027c09b2cc1af9c95fabe3fde86ec4ad4-runc.BqYcJL.mount: Deactivated successfully. 
Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.600 [WARNING][6591] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28650620-bee7-44f0-92c4-6968b30d2305", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"1df20020163592debb521a99d3967e24e28160b0f39e50e29563576b3022bdda", Pod:"goldmane-5b85766d88-c6h7b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9711373a25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.601 [INFO][6591] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 
23:47:06.601 [INFO][6591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" iface="eth0" netns="" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.601 [INFO][6591] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.601 [INFO][6591] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.681 [INFO][6613] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.685 [INFO][6613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.685 [INFO][6613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.696 [WARNING][6613] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.696 [INFO][6613] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" HandleID="k8s-pod-network.955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Workload="ip--172--31--16--149-k8s-goldmane--5b85766d88--c6h7b-eth0" Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.698 [INFO][6613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:06.706757 containerd[2098]: 2026-04-17 23:47:06.702 [INFO][6591] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595" Apr 17 23:47:06.708292 containerd[2098]: time="2026-04-17T23:47:06.707463026Z" level=info msg="TearDown network for sandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" successfully" Apr 17 23:47:06.722668 containerd[2098]: time="2026-04-17T23:47:06.722245882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:06.722668 containerd[2098]: time="2026-04-17T23:47:06.722330539Z" level=info msg="RemovePodSandbox \"955ca64097af48fbf825ec77f40a819f2383eea8b90f5de28874ead2ae7ea595\" returns successfully" Apr 17 23:47:06.759696 containerd[2098]: time="2026-04-17T23:47:06.759271642Z" level=info msg="StartContainer for \"e83dafd228c18611af6aaffa2c4403e027c09b2cc1af9c95fabe3fde86ec4ad4\" returns successfully" Apr 17 23:47:06.771388 containerd[2098]: time="2026-04-17T23:47:06.771032159Z" level=info msg="StopPodSandbox for \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\"" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.843 [WARNING][6637] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15f67ed1-2981-42fd-8b37-94a71c9f9349", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", 
ContainerID:"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d", Pod:"csi-node-driver-ch9lf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d979f851a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.843 [INFO][6637] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.843 [INFO][6637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" iface="eth0" netns="" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.843 [INFO][6637] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.843 [INFO][6637] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.911 [INFO][6646] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.914 [INFO][6646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.914 [INFO][6646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.931 [WARNING][6646] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.931 [INFO][6646] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.937 [INFO][6646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:06.951404 containerd[2098]: 2026-04-17 23:47:06.946 [INFO][6637] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:06.951404 containerd[2098]: time="2026-04-17T23:47:06.950567625Z" level=info msg="TearDown network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" successfully" Apr 17 23:47:06.951404 containerd[2098]: time="2026-04-17T23:47:06.950597477Z" level=info msg="StopPodSandbox for \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" returns successfully" Apr 17 23:47:06.992661 containerd[2098]: time="2026-04-17T23:47:06.992535326Z" level=info msg="RemovePodSandbox for \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\"" Apr 17 23:47:06.992661 containerd[2098]: time="2026-04-17T23:47:06.992581809Z" level=info msg="Forcibly stopping sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\"" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.054 [WARNING][6660] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15f67ed1-2981-42fd-8b37-94a71c9f9349", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"82b74f545b6870d52b3e62a1ecb27d6217c1e362699118a240e4bf1c74782c2d", Pod:"csi-node-driver-ch9lf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d979f851a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.055 [INFO][6660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.055 [INFO][6660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" iface="eth0" netns="" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.055 [INFO][6660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.055 [INFO][6660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.092 [INFO][6668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.092 [INFO][6668] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.093 [INFO][6668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.113 [WARNING][6668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.113 [INFO][6668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" HandleID="k8s-pod-network.dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Workload="ip--172--31--16--149-k8s-csi--node--driver--ch9lf-eth0" Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.120 [INFO][6668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:07.131416 containerd[2098]: 2026-04-17 23:47:07.125 [INFO][6660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe" Apr 17 23:47:07.132136 containerd[2098]: time="2026-04-17T23:47:07.131468303Z" level=info msg="TearDown network for sandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" successfully" Apr 17 23:47:07.146900 containerd[2098]: time="2026-04-17T23:47:07.146849967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:47:07.147322 containerd[2098]: time="2026-04-17T23:47:07.147209254Z" level=info msg="RemovePodSandbox \"dd6d8aea20191c7ea0fcf39e4faec02cad4ce1745226d3169f702f7accd6d9fe\" returns successfully" Apr 17 23:47:07.190641 containerd[2098]: time="2026-04-17T23:47:07.189027839Z" level=info msg="StopPodSandbox for \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\"" Apr 17 23:47:07.294718 systemd-resolved[1997]: Under memory pressure, flushing caches. 
Apr 17 23:47:07.295373 systemd-journald[1582]: Under memory pressure, flushing caches. Apr 17 23:47:07.294730 systemd-resolved[1997]: Flushed all caches. Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.368 [WARNING][6682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"2ac59079-9564-4c61-aa81-95cd5165fe7e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de", Pod:"calico-apiserver-64bcf5fd68-9j2x7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali175bed3475d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 
23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.370 [INFO][6682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.370 [INFO][6682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" iface="eth0" netns="" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.370 [INFO][6682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.370 [INFO][6682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.471 [INFO][6691] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.471 [INFO][6691] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.471 [INFO][6691] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.490 [WARNING][6691] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.490 [INFO][6691] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.493 [INFO][6691] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:07.521790 containerd[2098]: 2026-04-17 23:47:07.505 [INFO][6682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.521790 containerd[2098]: time="2026-04-17T23:47:07.519430839Z" level=info msg="TearDown network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" successfully" Apr 17 23:47:07.521790 containerd[2098]: time="2026-04-17T23:47:07.519456978Z" level=info msg="StopPodSandbox for \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" returns successfully" Apr 17 23:47:07.521790 containerd[2098]: time="2026-04-17T23:47:07.520350430Z" level=info msg="RemovePodSandbox for \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\"" Apr 17 23:47:07.521790 containerd[2098]: time="2026-04-17T23:47:07.520399516Z" level=info msg="Forcibly stopping sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\"" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.651 [WARNING][6705] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0", GenerateName:"calico-apiserver-64bcf5fd68-", Namespace:"calico-system", SelfLink:"", UID:"2ac59079-9564-4c61-aa81-95cd5165fe7e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64bcf5fd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-149", ContainerID:"86bbba2a0dd4db9f8bdef06692fd0a14918660d7aaf721b12aea778c8c5fc0de", Pod:"calico-apiserver-64bcf5fd68-9j2x7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali175bed3475d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.653 [INFO][6705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.653 [INFO][6705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" iface="eth0" netns="" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.653 [INFO][6705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.653 [INFO][6705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.721 [INFO][6713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0" Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.722 [INFO][6713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.722 [INFO][6713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.731 [WARNING][6713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.731 [INFO][6713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" HandleID="k8s-pod-network.8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b" Workload="ip--172--31--16--149-k8s-calico--apiserver--64bcf5fd68--9j2x7-eth0"
Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.732 [INFO][6713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:47:07.739855 containerd[2098]: 2026-04-17 23:47:07.735 [INFO][6705] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b"
Apr 17 23:47:07.742308 containerd[2098]: time="2026-04-17T23:47:07.741119102Z" level=info msg="TearDown network for sandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" successfully"
Apr 17 23:47:07.750890 containerd[2098]: time="2026-04-17T23:47:07.750367740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:47:07.750890 containerd[2098]: time="2026-04-17T23:47:07.750459567Z" level=info msg="RemovePodSandbox \"8a133c50a2c08885c0046e4fa8e8ded8831d90d78c495fc1d62288ea9c54f17b\" returns successfully"
Apr 17 23:47:08.077363 sshd[6361]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:08.088047 systemd[1]: sshd@8-172.31.16.149:22-20.229.252.112:47476.service: Deactivated successfully.
Apr 17 23:47:08.092346 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:47:08.093626 systemd-logind[2068]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:47:08.096327 systemd-logind[2068]: Removed session 9.
Apr 17 23:47:09.342135 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:09.341795 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:09.341806 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:09.609355 kubelet[3359]: I0417 23:47:09.609207 3359 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:47:09.742315 kubelet[3359]: I0417 23:47:09.714948 3359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74888fcb5-xlnn8" podStartSLOduration=7.998577712 podStartE2EDuration="29.714923683s" podCreationTimestamp="2026-04-17 23:46:40 +0000 UTC" firstStartedPulling="2026-04-17 23:46:43.784773514 +0000 UTC m=+43.421961185" lastFinishedPulling="2026-04-17 23:47:05.501119496 +0000 UTC m=+65.138307156" observedRunningTime="2026-04-17 23:47:07.553460847 +0000 UTC m=+67.190648527" watchObservedRunningTime="2026-04-17 23:47:09.714923683 +0000 UTC m=+69.352111362"
Apr 17 23:47:11.410592 systemd[1]: run-containerd-runc-k8s.io-96ecae4a6b7d62dad8f3ac6e460108a3f1dc9104b908fbf4b10b490fad9c2f04-runc.fFqOD4.mount: Deactivated successfully.
Apr 17 23:47:13.252496 systemd[1]: Started sshd@9-172.31.16.149:22-20.229.252.112:51410.service - OpenSSH per-connection server daemon (20.229.252.112:51410).
Apr 17 23:47:14.339171 sshd[6771]: Accepted publickey for core from 20.229.252.112 port 51410 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:14.343623 sshd[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:14.350671 systemd-logind[2068]: New session 10 of user core.
Apr 17 23:47:14.354409 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:47:14.910451 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:14.910302 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:14.910331 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:15.770796 sshd[6771]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:15.776324 systemd-logind[2068]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:47:15.777114 systemd[1]: sshd@9-172.31.16.149:22-20.229.252.112:51410.service: Deactivated successfully.
Apr 17 23:47:15.787379 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:47:15.792040 systemd-logind[2068]: Removed session 10.
Apr 17 23:47:15.948777 systemd[1]: Started sshd@10-172.31.16.149:22-20.229.252.112:39502.service - OpenSSH per-connection server daemon (20.229.252.112:39502).
Apr 17 23:47:16.988334 sshd[6800]: Accepted publickey for core from 20.229.252.112 port 39502 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:16.991074 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:16.997121 systemd-logind[2068]: New session 11 of user core.
Apr 17 23:47:17.003441 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:47:18.048144 sshd[6800]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:18.057813 systemd[1]: sshd@10-172.31.16.149:22-20.229.252.112:39502.service: Deactivated successfully.
Apr 17 23:47:18.059718 systemd-logind[2068]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:47:18.064148 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:47:18.065825 systemd-logind[2068]: Removed session 11.
Apr 17 23:47:18.209422 systemd[1]: Started sshd@11-172.31.16.149:22-20.229.252.112:39506.service - OpenSSH per-connection server daemon (20.229.252.112:39506).
Apr 17 23:47:19.249734 sshd[6821]: Accepted publickey for core from 20.229.252.112 port 39506 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:19.263861 sshd[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:19.274679 systemd-logind[2068]: New session 12 of user core.
Apr 17 23:47:19.280443 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:47:20.102342 sshd[6821]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:20.112526 systemd[1]: sshd@11-172.31.16.149:22-20.229.252.112:39506.service: Deactivated successfully.
Apr 17 23:47:20.119047 systemd-logind[2068]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:47:20.119942 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:47:20.122332 systemd-logind[2068]: Removed session 12.
Apr 17 23:47:20.925093 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:20.926156 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:20.925104 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:25.267389 systemd[1]: Started sshd@12-172.31.16.149:22-20.229.252.112:48410.service - OpenSSH per-connection server daemon (20.229.252.112:48410).
Apr 17 23:47:26.292575 sshd[6874]: Accepted publickey for core from 20.229.252.112 port 48410 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:26.296950 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:26.302949 systemd-logind[2068]: New session 13 of user core.
Apr 17 23:47:26.306404 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:47:27.676477 sshd[6874]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:27.688138 systemd[1]: sshd@12-172.31.16.149:22-20.229.252.112:48410.service: Deactivated successfully.
Apr 17 23:47:27.694088 systemd-logind[2068]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:47:27.694248 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:47:27.698092 systemd-logind[2068]: Removed session 13.
Apr 17 23:47:27.838697 systemd[1]: Started sshd@13-172.31.16.149:22-20.229.252.112:48422.service - OpenSSH per-connection server daemon (20.229.252.112:48422).
Apr 17 23:47:28.813732 sshd[6909]: Accepted publickey for core from 20.229.252.112 port 48422 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:28.815505 sshd[6909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:28.821109 systemd-logind[2068]: New session 14 of user core.
Apr 17 23:47:28.825393 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:47:28.925098 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:28.926182 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:28.925138 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:30.031410 sshd[6909]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:30.043440 systemd-logind[2068]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:47:30.044478 systemd[1]: sshd@13-172.31.16.149:22-20.229.252.112:48422.service: Deactivated successfully.
Apr 17 23:47:30.049502 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:47:30.051153 systemd-logind[2068]: Removed session 14.
Apr 17 23:47:30.203791 systemd[1]: Started sshd@14-172.31.16.149:22-20.229.252.112:48428.service - OpenSSH per-connection server daemon (20.229.252.112:48428).
Apr 17 23:47:30.973742 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:30.974275 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:30.973750 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:31.239844 sshd[6921]: Accepted publickey for core from 20.229.252.112 port 48428 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:31.250205 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:31.268575 systemd-logind[2068]: New session 15 of user core.
Apr 17 23:47:31.271393 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:47:32.646732 sshd[6921]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:32.656449 systemd-logind[2068]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:47:32.657156 systemd[1]: sshd@14-172.31.16.149:22-20.229.252.112:48428.service: Deactivated successfully.
Apr 17 23:47:32.665101 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:47:32.669528 systemd-logind[2068]: Removed session 15.
Apr 17 23:47:32.805667 systemd[1]: Started sshd@15-172.31.16.149:22-20.229.252.112:48432.service - OpenSSH per-connection server daemon (20.229.252.112:48432).
Apr 17 23:47:33.786043 sshd[6953]: Accepted publickey for core from 20.229.252.112 port 48432 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:33.786874 sshd[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:33.792233 systemd-logind[2068]: New session 16 of user core.
Apr 17 23:47:33.798684 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:47:34.940099 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:34.942159 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:34.940129 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:35.434674 sshd[6953]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:35.443691 systemd[1]: sshd@15-172.31.16.149:22-20.229.252.112:48432.service: Deactivated successfully.
Apr 17 23:47:35.449848 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:47:35.452559 systemd-logind[2068]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:47:35.455469 systemd-logind[2068]: Removed session 16.
Apr 17 23:47:35.599448 systemd[1]: Started sshd@16-172.31.16.149:22-20.229.252.112:41968.service - OpenSSH per-connection server daemon (20.229.252.112:41968).
Apr 17 23:47:36.609698 sshd[6965]: Accepted publickey for core from 20.229.252.112 port 41968 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:36.613264 sshd[6965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:36.619710 systemd-logind[2068]: New session 17 of user core.
Apr 17 23:47:36.624407 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:47:36.988321 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:36.990041 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:36.988330 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:37.615716 sshd[6965]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:37.620136 systemd[1]: sshd@16-172.31.16.149:22-20.229.252.112:41968.service: Deactivated successfully.
Apr 17 23:47:37.630574 systemd-logind[2068]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:47:37.631530 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:47:37.634113 systemd-logind[2068]: Removed session 17.
Apr 17 23:47:42.341965 systemd[1]: run-containerd-runc-k8s.io-56d87e0b64d0c54f1657bab8be2e360dc0eb9205d63441dd80dc74305ed3977a-runc.W0PLI3.mount: Deactivated successfully.
Apr 17 23:47:42.786346 systemd[1]: Started sshd@17-172.31.16.149:22-20.229.252.112:41972.service - OpenSSH per-connection server daemon (20.229.252.112:41972).
Apr 17 23:47:43.811896 sshd[7006]: Accepted publickey for core from 20.229.252.112 port 41972 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:43.817273 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:43.824095 systemd-logind[2068]: New session 18 of user core.
Apr 17 23:47:43.827393 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:47:44.924311 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:44.926239 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:44.924350 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:44.966258 sshd[7006]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:44.970513 systemd[1]: sshd@17-172.31.16.149:22-20.229.252.112:41972.service: Deactivated successfully.
Apr 17 23:47:44.972717 systemd-logind[2068]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:47:44.977261 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:47:44.980390 systemd-logind[2068]: Removed session 18.
Apr 17 23:47:46.973643 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:46.974071 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:46.973653 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:50.145158 systemd[1]: Started sshd@18-172.31.16.149:22-20.229.252.112:38438.service - OpenSSH per-connection server daemon (20.229.252.112:38438).
Apr 17 23:47:51.226101 sshd[7039]: Accepted publickey for core from 20.229.252.112 port 38438 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:51.230444 sshd[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:51.243626 systemd-logind[2068]: New session 19 of user core.
Apr 17 23:47:51.248469 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:47:51.756300 systemd[1]: run-containerd-runc-k8s.io-bf66e22b392e789002858b1d075a405d3067dfe455e29be97c6f1b72403fa442-runc.fwnVwE.mount: Deactivated successfully.
Apr 17 23:47:52.616733 sshd[7039]: pam_unix(sshd:session): session closed for user core
Apr 17 23:47:52.624237 systemd[1]: sshd@18-172.31.16.149:22-20.229.252.112:38438.service: Deactivated successfully.
Apr 17 23:47:52.631171 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:47:52.632164 systemd-logind[2068]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:47:52.633529 systemd-logind[2068]: Removed session 19.
Apr 17 23:47:52.924138 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:47:52.926143 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:47:52.924149 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:47:57.772383 systemd[1]: Started sshd@19-172.31.16.149:22-20.229.252.112:56556.service - OpenSSH per-connection server daemon (20.229.252.112:56556).
Apr 17 23:47:58.798662 sshd[7093]: Accepted publickey for core from 20.229.252.112 port 56556 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:47:58.803296 sshd[7093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:47:58.810021 systemd-logind[2068]: New session 20 of user core.
Apr 17 23:47:58.813609 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:48:00.523082 sshd[7093]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:00.526948 systemd[1]: sshd@19-172.31.16.149:22-20.229.252.112:56556.service: Deactivated successfully.
Apr 17 23:48:00.533134 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:48:00.535454 systemd-logind[2068]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:48:00.536688 systemd-logind[2068]: Removed session 20.
Apr 17 23:48:00.988455 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:48:00.988489 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:48:00.990447 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:48:05.703391 systemd[1]: Started sshd@20-172.31.16.149:22-20.229.252.112:59214.service - OpenSSH per-connection server daemon (20.229.252.112:59214).
Apr 17 23:48:06.760366 sshd[7124]: Accepted publickey for core from 20.229.252.112 port 59214 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:48:06.762910 sshd[7124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:48:06.768755 systemd-logind[2068]: New session 21 of user core.
Apr 17 23:48:06.772364 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:48:07.939719 sshd[7124]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:07.946864 systemd[1]: sshd@20-172.31.16.149:22-20.229.252.112:59214.service: Deactivated successfully.
Apr 17 23:48:07.953988 systemd-logind[2068]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:48:07.954070 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:48:07.956647 systemd-logind[2068]: Removed session 21.
Apr 17 23:48:08.924869 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:48:08.924902 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:48:08.927024 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:48:23.773709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a-rootfs.mount: Deactivated successfully.
Apr 17 23:48:23.871923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b-rootfs.mount: Deactivated successfully.
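The sshd session churn above follows a fixed pattern: a `pam_unix(sshd:session): session opened` entry and a later `session closed` entry sharing the same sshd PID. As an illustrative sketch (not part of the log; the `session_durations` helper and the hard-coded year are assumptions, since journald's short timestamps omit the year), these pairs can be reduced to per-session durations:

```python
import re
from datetime import datetime

# Matches journald-prefixed pam_unix lines as seen in this log, e.g.
# "Apr 17 23:47:14.343623 sshd[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)"
PAM = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) sshd\[(?P<pid>\d+)\]: "
    r"pam_unix\(sshd:session\): session (?P<event>opened|closed) for user"
)

def session_durations(lines, year=2026):
    """Pair 'session opened'/'session closed' entries by sshd PID and
    return {pid: duration_in_seconds}."""
    opened, durations = {}, {}
    for line in lines:
        m = PAM.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        pid = int(m["pid"])
        if m["event"] == "opened":
            opened[pid] = ts
        elif pid in opened:
            # Closed-without-opened lines (truncated log head) are skipped.
            durations[pid] = (ts - opened.pop(pid)).total_seconds()
    return durations
```

Applied to session 10 above (sshd[6771], opened 23:47:14.343623, closed 23:47:15.770796), this would report a roughly 1.4-second session, consistent with the short repeated connections from 20.229.252.112.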
Apr 17 23:48:23.897836 containerd[2098]: time="2026-04-17T23:48:23.850753975Z" level=info msg="shim disconnected" id=01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a namespace=k8s.io
Apr 17 23:48:23.904377 containerd[2098]: time="2026-04-17T23:48:23.897848973Z" level=warning msg="cleaning up after shim disconnected" id=01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a namespace=k8s.io
Apr 17 23:48:23.904377 containerd[2098]: time="2026-04-17T23:48:23.897878432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:48:23.904377 containerd[2098]: time="2026-04-17T23:48:23.870203543Z" level=info msg="shim disconnected" id=213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b namespace=k8s.io
Apr 17 23:48:23.904377 containerd[2098]: time="2026-04-17T23:48:23.898083354Z" level=warning msg="cleaning up after shim disconnected" id=213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b namespace=k8s.io
Apr 17 23:48:23.904377 containerd[2098]: time="2026-04-17T23:48:23.898096518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:48:23.994967 containerd[2098]: time="2026-04-17T23:48:23.994917331Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:48:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:48:24.362858 kubelet[3359]: E0417 23:48:24.345441 3359 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 17 23:48:24.899751 kubelet[3359]: I0417 23:48:24.899701 3359 scope.go:117] "RemoveContainer" containerID="01f4515c6ed99b02ec9206a920079b5b8229e5b29d0e420d81464de6919feb2a"
Apr 17 23:48:24.904855 kubelet[3359]: I0417 23:48:24.904713 3359 scope.go:117] "RemoveContainer" containerID="213e3240e91e25041684e2ba27ad3d0fece25a1eaceb2cde4e6d5cd39edb0f0b"
Apr 17 23:48:25.015143 containerd[2098]: time="2026-04-17T23:48:25.014939399Z" level=info msg="CreateContainer within sandbox \"f049c7a9ef246dcc32dbca6a96c78a6d92d539ac02ce16ee3f1a4781cf8e0616\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:48:25.020835 containerd[2098]: time="2026-04-17T23:48:25.019656395Z" level=info msg="CreateContainer within sandbox \"5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:48:25.156038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180188789.mount: Deactivated successfully.
Apr 17 23:48:25.165766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647879607.mount: Deactivated successfully.
Apr 17 23:48:25.190046 containerd[2098]: time="2026-04-17T23:48:25.189437669Z" level=info msg="CreateContainer within sandbox \"f049c7a9ef246dcc32dbca6a96c78a6d92d539ac02ce16ee3f1a4781cf8e0616\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9cdfcc3a2be8ced25956484e138673fd0cbecad76b73fa47223e00238f816b54\""
Apr 17 23:48:25.206102 containerd[2098]: time="2026-04-17T23:48:25.205530227Z" level=info msg="StartContainer for \"9cdfcc3a2be8ced25956484e138673fd0cbecad76b73fa47223e00238f816b54\""
Apr 17 23:48:25.212636 containerd[2098]: time="2026-04-17T23:48:25.212559973Z" level=info msg="CreateContainer within sandbox \"5a1894e971bc2d08c5a8b15d881aa40cb8db565fac988954859ce998944ada66\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8f9220177a38df64c58b292f25fb4256ef014ab82d16a17f58b105db67c82fba\""
Apr 17 23:48:25.215227 containerd[2098]: time="2026-04-17T23:48:25.215190371Z" level=info msg="StartContainer for \"8f9220177a38df64c58b292f25fb4256ef014ab82d16a17f58b105db67c82fba\""
Apr 17 23:48:25.372580 containerd[2098]: time="2026-04-17T23:48:25.372422254Z" level=info msg="StartContainer for \"8f9220177a38df64c58b292f25fb4256ef014ab82d16a17f58b105db67c82fba\" returns successfully"
Apr 17 23:48:25.384120 containerd[2098]: time="2026-04-17T23:48:25.384079337Z" level=info msg="StartContainer for \"9cdfcc3a2be8ced25956484e138673fd0cbecad76b73fa47223e00238f816b54\" returns successfully"
Apr 17 23:48:26.974074 systemd-journald[1582]: Under memory pressure, flushing caches.
Apr 17 23:48:26.972118 systemd-resolved[1997]: Under memory pressure, flushing caches.
Apr 17 23:48:26.972162 systemd-resolved[1997]: Flushed all caches.
Apr 17 23:48:28.431264 containerd[2098]: time="2026-04-17T23:48:28.431180537Z" level=info msg="shim disconnected" id=fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e namespace=k8s.io
Apr 17 23:48:28.431901 containerd[2098]: time="2026-04-17T23:48:28.431338297Z" level=warning msg="cleaning up after shim disconnected" id=fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e namespace=k8s.io
Apr 17 23:48:28.431901 containerd[2098]: time="2026-04-17T23:48:28.431353885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:48:28.434379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e-rootfs.mount: Deactivated successfully.
Apr 17 23:48:28.931379 kubelet[3359]: I0417 23:48:28.931341 3359 scope.go:117] "RemoveContainer" containerID="fd5cc2a751a44384d112a2375a64ae84293502b1a24d52827e07c170b858ec5e"
Apr 17 23:48:28.933767 containerd[2098]: time="2026-04-17T23:48:28.933726274Z" level=info msg="CreateContainer within sandbox \"9c9d672d9a3efd9d8980e56f38a8adee8d12c45c70a890d3689e38657a07ac42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:48:28.982101 containerd[2098]: time="2026-04-17T23:48:28.980804964Z" level=info msg="CreateContainer within sandbox \"9c9d672d9a3efd9d8980e56f38a8adee8d12c45c70a890d3689e38657a07ac42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"58394c3cf0d645e43b5357b6aaf3aaddfe0cec86efcf2c381b9a6863827e53bd\""
Apr 17 23:48:28.984240 containerd[2098]: time="2026-04-17T23:48:28.983062796Z" level=info msg="StartContainer for \"58394c3cf0d645e43b5357b6aaf3aaddfe0cec86efcf2c381b9a6863827e53bd\""
Apr 17 23:48:29.071216 containerd[2098]: time="2026-04-17T23:48:29.071173716Z" level=info msg="StartContainer for \"58394c3cf0d645e43b5357b6aaf3aaddfe0cec86efcf2c381b9a6863827e53bd\" returns successfully"
Apr 17 23:48:34.363985 kubelet[3359]: E0417 23:48:34.363938 3359 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-149?timeout=10s\": context deadline exceeded"
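Each crash in this tail shows the same lifecycle: containerd logs `msg="shim disconnected" id=<container>` and the kubelet later logs a matching `"RemoveContainer" containerID=<container>` before restarting the pod (e.g. kube-controller-manager, tigera-operator, kube-scheduler above). As an illustrative sketch (not part of the log; `crashed_but_not_removed` is a hypothetical helper), one can cross-check that every disconnected shim was eventually cleaned up by the kubelet:

```python
import re

# Matches containerd "shim disconnected" entries as they appear in this log.
SHIM = re.compile(r'msg="shim disconnected" id=(?P<cid>[0-9a-f]{64})')
# Matches kubelet "RemoveContainer" entries for full 64-hex container IDs.
REMOVE = re.compile(r'"RemoveContainer" containerID="(?P<cid>[0-9a-f]{64})"')

def crashed_but_not_removed(lines):
    """Return container IDs whose shim disconnected but for which no
    kubelet RemoveContainer entry appears anywhere in the given lines."""
    crashed = {m["cid"] for line in lines for m in SHIM.finditer(line)}
    removed = {m["cid"] for line in lines for m in REMOVE.finditer(line)}
    return crashed - removed
```

On the entries above this returns the empty set: all three disconnected shims (`01f4515c…`, `213e3240…`, `fd5cc2a7…`) have matching RemoveContainer lines, so the remaining anomaly is only the repeated node-lease update timeout against 172.31.16.149:6443.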