Apr 24 23:53:21.003029 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:53:21.003068 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:53:21.003086 kernel: BIOS-provided physical RAM map:
Apr 24 23:53:21.003097 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 23:53:21.003108 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 24 23:53:21.003117 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 24 23:53:21.003130 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 24 23:53:21.003141 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 24 23:53:21.003152 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 24 23:53:21.003165 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 24 23:53:21.003176 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 24 23:53:21.003187 kernel: NX (Execute Disable) protection: active
Apr 24 23:53:21.003198 kernel: APIC: Static calls initialized
Apr 24 23:53:21.003210 kernel: efi: EFI v2.7 by EDK II
Apr 24 23:53:21.003225 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 24 23:53:21.003240 kernel: SMBIOS 2.7 present.
Apr 24 23:53:21.003253 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 24 23:53:21.003265 kernel: Hypervisor detected: KVM
Apr 24 23:53:21.003279 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:53:21.003292 kernel: kvm-clock: using sched offset of 4340712631 cycles
Apr 24 23:53:21.003307 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:53:21.003321 kernel: tsc: Detected 2499.998 MHz processor
Apr 24 23:53:21.003334 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:53:21.003349 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:53:21.003362 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 24 23:53:21.003379 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 23:53:21.003393 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:53:21.003407 kernel: Using GB pages for direct mapping
Apr 24 23:53:21.003420 kernel: Secure boot disabled
Apr 24 23:53:21.003434 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:53:21.003448 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 24 23:53:21.003462 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 24 23:53:21.003476 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 24 23:53:21.003490 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 24 23:53:21.003507 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 24 23:53:21.003520 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 24 23:53:21.003534 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 24 23:53:21.003548 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 24 23:53:21.003562 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 24 23:53:21.003576 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 24 23:53:21.003596 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 24 23:53:21.003614 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 24 23:53:21.003629 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 24 23:53:21.003644 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 24 23:53:21.003659 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 24 23:53:21.003674 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 24 23:53:21.003689 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 24 23:53:21.003704 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 24 23:53:21.003722 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 24 23:53:21.003737 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 24 23:53:21.003752 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 24 23:53:21.003767 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 24 23:53:21.004310 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 24 23:53:21.004328 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 24 23:53:21.004343 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 24 23:53:21.004357 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 24 23:53:21.004371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 24 23:53:21.004390 kernel: NUMA: Initialized distance table, cnt=1
Apr 24 23:53:21.004404 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 24 23:53:21.004419 kernel: Zone ranges:
Apr 24 23:53:21.004433 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:53:21.004447 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 24 23:53:21.004461 kernel: Normal empty
Apr 24 23:53:21.004475 kernel: Movable zone start for each node
Apr 24 23:53:21.004489 kernel: Early memory node ranges
Apr 24 23:53:21.004503 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 23:53:21.004520 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 24 23:53:21.004534 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 24 23:53:21.004548 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 24 23:53:21.004563 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:53:21.004577 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 23:53:21.004591 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 24 23:53:21.004606 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 24 23:53:21.004620 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 24 23:53:21.004635 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:53:21.004649 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 24 23:53:21.004667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:53:21.004681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:53:21.004696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:53:21.004710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:53:21.004724 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:53:21.004738 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:53:21.004752 kernel: TSC deadline timer available
Apr 24 23:53:21.004766 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 24 23:53:21.005921 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:53:21.005944 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 24 23:53:21.005961 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:53:21.005977 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:53:21.005994 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 23:53:21.006009 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 24 23:53:21.006025 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 24 23:53:21.006040 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:53:21.006056 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:53:21.006073 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:53:21.006095 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:53:21.006112 kernel: random: crng init done
Apr 24 23:53:21.006127 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:53:21.006143 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:53:21.006159 kernel: Fallback order for Node 0: 0
Apr 24 23:53:21.006174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 24 23:53:21.006190 kernel: Policy zone: DMA32
Apr 24 23:53:21.006206 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:53:21.006226 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved)
Apr 24 23:53:21.006242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:53:21.006258 kernel: Kernel/User page tables isolation: enabled
Apr 24 23:53:21.006274 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:53:21.006290 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:53:21.006305 kernel: Dynamic Preempt: voluntary
Apr 24 23:53:21.006321 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:53:21.006338 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:53:21.006355 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:53:21.006375 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:53:21.006391 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:53:21.006407 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:53:21.006422 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:53:21.006437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:53:21.006454 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 23:53:21.006470 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:53:21.006503 kernel: Console: colour dummy device 80x25
Apr 24 23:53:21.006520 kernel: printk: console [tty0] enabled
Apr 24 23:53:21.006536 kernel: printk: console [ttyS0] enabled
Apr 24 23:53:21.006553 kernel: ACPI: Core revision 20230628
Apr 24 23:53:21.006570 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 24 23:53:21.006591 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:53:21.006607 kernel: x2apic enabled
Apr 24 23:53:21.006624 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:53:21.006642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 24 23:53:21.006659 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 24 23:53:21.006679 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 24 23:53:21.006696 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 24 23:53:21.006713 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:53:21.006729 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:53:21.006746 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:53:21.006763 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:53:21.006795 kernel: RETBleed: Vulnerable
Apr 24 23:53:21.006868 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:53:21.006882 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:53:21.006953 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:53:21.007080 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 23:53:21.007150 kernel: active return thunk: its_return_thunk
Apr 24 23:53:21.007252 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:53:21.010826 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:53:21.010851 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:53:21.010866 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:53:21.010879 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 24 23:53:21.010895 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 24 23:53:21.010909 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:53:21.010926 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:53:21.010943 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:53:21.010966 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 23:53:21.010982 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:53:21.010999 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 24 23:53:21.011016 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 24 23:53:21.011033 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 24 23:53:21.011050 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 24 23:53:21.011067 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 24 23:53:21.011085 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 24 23:53:21.011102 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 24 23:53:21.011119 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:53:21.011136 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:53:21.011157 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:53:21.011174 kernel: landlock: Up and running.
Apr 24 23:53:21.011191 kernel: SELinux: Initializing.
Apr 24 23:53:21.011208 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 24 23:53:21.011226 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 24 23:53:21.011243 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Apr 24 23:53:21.011261 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:53:21.011278 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:53:21.011296 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:53:21.011312 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 24 23:53:21.011332 kernel: signal: max sigframe size: 3632
Apr 24 23:53:21.011348 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:53:21.011364 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:53:21.011381 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:53:21.011396 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:53:21.011411 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:53:21.011426 kernel: .... node #0, CPUs: #1
Apr 24 23:53:21.011443 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 24 23:53:21.011462 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 24 23:53:21.011481 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:53:21.011495 kernel: smpboot: Max logical packages: 1
Apr 24 23:53:21.011510 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 24 23:53:21.011526 kernel: devtmpfs: initialized
Apr 24 23:53:21.011540 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:53:21.011555 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 24 23:53:21.011571 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:53:21.011589 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:53:21.011605 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:53:21.011626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:53:21.011643 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:53:21.011660 kernel: audit: type=2000 audit(1777074800.683:1): state=initialized audit_enabled=0 res=1
Apr 24 23:53:21.011678 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:53:21.011693 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:53:21.011707 kernel: cpuidle: using governor menu
Apr 24 23:53:21.011722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:53:21.011737 kernel: dca service started, version 1.12.1
Apr 24 23:53:21.011753 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:53:21.011795 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:53:21.011811 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:53:21.011826 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:53:21.011840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:53:21.011855 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:53:21.011869 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:53:21.011884 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:53:21.011898 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:53:21.011912 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 24 23:53:21.011931 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:53:21.011945 kernel: ACPI: Interpreter enabled
Apr 24 23:53:21.011960 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:53:21.011976 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:53:21.011991 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:53:21.012007 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:53:21.012022 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 24 23:53:21.012037 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:53:21.012263 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:53:21.012411 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 24 23:53:21.012545 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 24 23:53:21.012564 kernel: acpiphp: Slot [3] registered
Apr 24 23:53:21.012581 kernel: acpiphp: Slot [4] registered
Apr 24 23:53:21.012596 kernel: acpiphp: Slot [5] registered
Apr 24 23:53:21.012612 kernel: acpiphp: Slot [6] registered
Apr 24 23:53:21.012628 kernel: acpiphp: Slot [7] registered
Apr 24 23:53:21.012647 kernel: acpiphp: Slot [8] registered
Apr 24 23:53:21.012662 kernel: acpiphp: Slot [9] registered
Apr 24 23:53:21.012678 kernel: acpiphp: Slot [10] registered
Apr 24 23:53:21.012694 kernel: acpiphp: Slot [11] registered
Apr 24 23:53:21.012710 kernel: acpiphp: Slot [12] registered
Apr 24 23:53:21.012726 kernel: acpiphp: Slot [13] registered
Apr 24 23:53:21.012742 kernel: acpiphp: Slot [14] registered
Apr 24 23:53:21.012758 kernel: acpiphp: Slot [15] registered
Apr 24 23:53:21.012799 kernel: acpiphp: Slot [16] registered
Apr 24 23:53:21.012816 kernel: acpiphp: Slot [17] registered
Apr 24 23:53:21.012835 kernel: acpiphp: Slot [18] registered
Apr 24 23:53:21.012852 kernel: acpiphp: Slot [19] registered
Apr 24 23:53:21.012868 kernel: acpiphp: Slot [20] registered
Apr 24 23:53:21.012884 kernel: acpiphp: Slot [21] registered
Apr 24 23:53:21.012900 kernel: acpiphp: Slot [22] registered
Apr 24 23:53:21.012916 kernel: acpiphp: Slot [23] registered
Apr 24 23:53:21.012932 kernel: acpiphp: Slot [24] registered
Apr 24 23:53:21.012949 kernel: acpiphp: Slot [25] registered
Apr 24 23:53:21.012965 kernel: acpiphp: Slot [26] registered
Apr 24 23:53:21.012984 kernel: acpiphp: Slot [27] registered
Apr 24 23:53:21.012999 kernel: acpiphp: Slot [28] registered
Apr 24 23:53:21.013016 kernel: acpiphp: Slot [29] registered
Apr 24 23:53:21.013031 kernel: acpiphp: Slot [30] registered
Apr 24 23:53:21.013048 kernel: acpiphp: Slot [31] registered
Apr 24 23:53:21.013064 kernel: PCI host bridge to bus 0000:00
Apr 24 23:53:21.013210 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:53:21.013334 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:53:21.013456 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:53:21.013572 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 24 23:53:21.013687 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 24 23:53:21.016877 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:53:21.017078 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 24 23:53:21.017231 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 24 23:53:21.017372 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 24 23:53:21.017509 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 24 23:53:21.017639 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 24 23:53:21.018818 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 24 23:53:21.019010 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 24 23:53:21.019157 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 24 23:53:21.019296 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 24 23:53:21.019433 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 24 23:53:21.019586 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 24 23:53:21.019720 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 24 23:53:21.025991 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 24 23:53:21.026159 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 24 23:53:21.026301 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:53:21.026451 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 24 23:53:21.026598 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 24 23:53:21.026746 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 24 23:53:21.027208 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 24 23:53:21.027238 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:53:21.027253 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:53:21.027267 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:53:21.027283 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:53:21.027299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 24 23:53:21.027320 kernel: iommu: Default domain type: Translated
Apr 24 23:53:21.027336 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:53:21.027351 kernel: efivars: Registered efivars operations
Apr 24 23:53:21.027367 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:53:21.027383 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:53:21.027399 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 24 23:53:21.027414 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 24 23:53:21.027566 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 24 23:53:21.027713 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 24 23:53:21.031622 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:53:21.031654 kernel: vgaarb: loaded
Apr 24 23:53:21.031673 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 24 23:53:21.031690 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 24 23:53:21.031708 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:53:21.031724 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:53:21.031740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:53:21.031757 kernel: pnp: PnP ACPI init
Apr 24 23:53:21.031849 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 23:53:21.031872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:53:21.031889 kernel: NET: Registered PF_INET protocol family
Apr 24 23:53:21.031906 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:53:21.031923 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 24 23:53:21.031940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:53:21.031956 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 24 23:53:21.031973 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 24 23:53:21.031990 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 24 23:53:21.032010 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 24 23:53:21.032027 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 24 23:53:21.032044 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:53:21.032061 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:53:21.032208 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:53:21.032333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:53:21.032456 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:53:21.032580 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 24 23:53:21.032701 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 24 23:53:21.032900 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 24 23:53:21.032923 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:53:21.032940 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:53:21.032957 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 24 23:53:21.032974 kernel: clocksource: Switched to clocksource tsc
Apr 24 23:53:21.032989 kernel: Initialise system trusted keyrings
Apr 24 23:53:21.033006 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 24 23:53:21.033023 kernel: Key type asymmetric registered
Apr 24 23:53:21.033044 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:53:21.033059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:53:21.033074 kernel: io scheduler mq-deadline registered
Apr 24 23:53:21.033090 kernel: io scheduler kyber registered
Apr 24 23:53:21.033107 kernel: io scheduler bfq registered
Apr 24 23:53:21.033123 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:53:21.033139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:53:21.033156 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:53:21.033172 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:53:21.033192 kernel: i8042: Warning: Keylock active
Apr 24 23:53:21.033209 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:53:21.033225 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:53:21.033372 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 24 23:53:21.033507 kernel: rtc_cmos 00:00: registered as rtc0
Apr 24 23:53:21.033636 kernel: rtc_cmos 00:00: setting system clock to 2026-04-24T23:53:20 UTC (1777074800)
Apr 24 23:53:21.033763 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 24 23:53:21.033809 kernel: intel_pstate: CPU model not supported
Apr 24 23:53:21.033828 kernel: efifb: probing for efifb
Apr 24 23:53:21.033841 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 24 23:53:21.033854 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 24 23:53:21.033870 kernel: efifb: scrolling: redraw
Apr 24 23:53:21.033884 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:53:21.033901 kernel: Console: switching to colour frame buffer device 100x37
Apr 24 23:53:21.033916 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:53:21.033932 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:53:21.033949 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:53:21.033970 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:53:21.033987 kernel: Segment Routing with IPv6
Apr 24 23:53:21.034001 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:53:21.034016 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:53:21.034031 kernel: Key type dns_resolver registered
Apr 24 23:53:21.034046 kernel: IPI shorthand broadcast: enabled
Apr 24 23:53:21.034095 kernel: sched_clock: Marking stable (546002256, 184341237)->(837570772, -107227279)
Apr 24 23:53:21.034114 kernel: registered taskstats version 1
Apr 24 23:53:21.034128 kernel: Loading compiled-in X.509 certificates
Apr 24 23:53:21.034147 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:53:21.034162 kernel: Key type .fscrypt registered
Apr 24 23:53:21.034178 kernel: Key type fscrypt-provisioning registered
Apr 24 23:53:21.034194 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:53:21.034210 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:53:21.034227 kernel: ima: No architecture policies found
Apr 24 23:53:21.034243 kernel: clk: Disabling unused clocks
Apr 24 23:53:21.034260 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:53:21.034277 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:53:21.034298 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:53:21.034314 kernel: Run /init as init process
Apr 24 23:53:21.034332 kernel: with arguments:
Apr 24 23:53:21.034348 kernel: /init
Apr 24 23:53:21.034366 kernel: with environment:
Apr 24 23:53:21.034381 kernel: HOME=/
Apr 24 23:53:21.034395 kernel: TERM=linux
Apr 24 23:53:21.034412 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:53:21.034432 systemd[1]: Detected virtualization amazon.
Apr 24 23:53:21.034447 systemd[1]: Detected architecture x86-64.
Apr 24 23:53:21.034464 systemd[1]: Running in initrd.
Apr 24 23:53:21.034481 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:53:21.034498 systemd[1]: Hostname set to .
Apr 24 23:53:21.034517 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:53:21.034535 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:53:21.034552 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:53:21.034572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:53:21.034591 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:53:21.034609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:53:21.034624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:53:21.034645 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:53:21.034734 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:53:21.035574 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:53:21.035600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:53:21.035620 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:53:21.035640 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:53:21.035660 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:53:21.035679 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:53:21.035703 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:53:21.035721 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:53:21.035740 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:53:21.035759 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:53:21.035799 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:53:21.035818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:53:21.035836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:53:21.035854 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:53:21.035873 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:53:21.035895 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:53:21.035914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:53:21.035932 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:53:21.035950 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:53:21.035968 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:53:21.035987 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:53:21.036005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:21.036024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:53:21.036043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:53:21.036064 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:53:21.036113 systemd-journald[179]: Collecting audit messages is disabled.
Apr 24 23:53:21.036157 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:53:21.036177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:53:21.036195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:21.036215 systemd-journald[179]: Journal started
Apr 24 23:53:21.036255 systemd-journald[179]: Runtime Journal (/run/log/journal/ec25dc67a7a5ea3ea5be09d163edee8a) is 4.7M, max 38.2M, 33.4M free.
Apr 24 23:53:21.015913 systemd-modules-load[180]: Inserted module 'overlay'
Apr 24 23:53:21.041534 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:53:21.048030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:53:21.053916 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:53:21.057189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:53:21.078891 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:53:21.079988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:53:21.085509 kernel: Bridge firewalling registered
Apr 24 23:53:21.082935 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 24 23:53:21.087606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:53:21.097190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:53:21.103236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:53:21.108802 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:53:21.111009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:53:21.119062 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:53:21.120599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:53:21.130007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:53:21.135696 dracut-cmdline[211]: dracut-dracut-053
Apr 24 23:53:21.140119 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:53:21.174741 systemd-resolved[214]: Positive Trust Anchors:
Apr 24 23:53:21.174759 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:53:21.174844 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:53:21.184543 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 24 23:53:21.186871 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:53:21.188240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:53:21.228805 kernel: SCSI subsystem initialized
Apr 24 23:53:21.238800 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:53:21.250796 kernel: iscsi: registered transport (tcp)
Apr 24 23:53:21.273523 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:53:21.273997 kernel: QLogic iSCSI HBA Driver
Apr 24 23:53:21.313228 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:53:21.319930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:53:21.347137 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:53:21.347224 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:53:21.347247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:53:21.390808 kernel: raid6: avx512x4 gen() 17917 MB/s
Apr 24 23:53:21.408801 kernel: raid6: avx512x2 gen() 17813 MB/s
Apr 24 23:53:21.426801 kernel: raid6: avx512x1 gen() 17794 MB/s
Apr 24 23:53:21.444796 kernel: raid6: avx2x4 gen() 17739 MB/s
Apr 24 23:53:21.462798 kernel: raid6: avx2x2 gen() 17696 MB/s
Apr 24 23:53:21.481806 kernel: raid6: avx2x1 gen() 13729 MB/s
Apr 24 23:53:21.481854 kernel: raid6: using algorithm avx512x4 gen() 17917 MB/s
Apr 24 23:53:21.501801 kernel: raid6: .... xor() 7713 MB/s, rmw enabled
Apr 24 23:53:21.501859 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:53:21.525807 kernel: xor: automatically using best checksumming function avx
Apr 24 23:53:21.687814 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:53:21.699228 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:53:21.704984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:53:21.720350 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 24 23:53:21.725485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:53:21.733964 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:53:21.755189 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 24 23:53:21.787274 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:53:21.798015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:53:21.849632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:53:21.859558 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:53:21.881703 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:53:21.883198 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:53:21.883718 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:53:21.884405 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:53:21.895055 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:53:21.912319 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:53:21.947355 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:53:21.947636 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:53:21.952796 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:53:21.956794 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 24 23:53:21.966803 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:8d:1f:6a:01:ff
Apr 24 23:53:21.980898 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:53:21.981434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:53:21.981517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:53:21.984910 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:53:21.986831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:53:21.986921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:21.989900 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:21.999010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:22.015513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:53:22.016480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:22.024056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:22.029701 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:53:22.031803 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:53:22.036374 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:53:22.041045 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 24 23:53:22.053108 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:53:22.063819 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:53:22.063890 kernel: GPT:9289727 != 33554431
Apr 24 23:53:22.063911 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:53:22.063930 kernel: GPT:9289727 != 33554431
Apr 24 23:53:22.063949 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:53:22.063975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:53:22.076457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:22.085021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:53:22.107732 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:53:22.181890 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (446)
Apr 24 23:53:22.214508 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Apr 24 23:53:22.247240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:53:22.270728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:53:22.282201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:53:22.288261 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:53:22.288920 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:53:22.295945 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:53:22.304590 disk-uuid[633]: Primary Header is updated.
Apr 24 23:53:22.304590 disk-uuid[633]: Secondary Entries is updated.
Apr 24 23:53:22.304590 disk-uuid[633]: Secondary Header is updated.
Apr 24 23:53:22.311808 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:53:22.319804 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:53:22.327803 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:53:23.336942 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:53:23.337016 disk-uuid[634]: The operation has completed successfully.
Apr 24 23:53:23.489352 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:53:23.489499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:53:23.507021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:53:23.511578 sh[975]: Success
Apr 24 23:53:23.533802 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:53:23.638191 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:53:23.644900 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:53:23.650738 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:53:23.685997 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:53:23.686080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:53:23.686102 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:53:23.689090 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:53:23.690835 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:53:23.793802 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:53:23.817426 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:53:23.818697 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:53:23.823971 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:53:23.827986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:53:23.855121 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:53:23.855191 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:53:23.855214 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:53:23.876645 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:53:23.889099 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:53:23.892995 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:53:23.903608 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:53:23.912029 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:53:23.945723 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:53:23.956977 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:53:23.978544 systemd-networkd[1167]: lo: Link UP
Apr 24 23:53:23.978562 systemd-networkd[1167]: lo: Gained carrier
Apr 24 23:53:23.980301 systemd-networkd[1167]: Enumeration completed
Apr 24 23:53:23.980428 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:53:23.981004 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:53:23.981010 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:53:23.982737 systemd[1]: Reached target network.target - Network.
Apr 24 23:53:23.984290 systemd-networkd[1167]: eth0: Link UP
Apr 24 23:53:23.984295 systemd-networkd[1167]: eth0: Gained carrier
Apr 24 23:53:23.984306 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:53:23.994856 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.23.136/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:53:24.306668 ignition[1118]: Ignition 2.19.0
Apr 24 23:53:24.306686 ignition[1118]: Stage: fetch-offline
Apr 24 23:53:24.307106 ignition[1118]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:24.307120 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:24.309071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:53:24.307562 ignition[1118]: Ignition finished successfully
Apr 24 23:53:24.318467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:53:24.333189 ignition[1176]: Ignition 2.19.0
Apr 24 23:53:24.333208 ignition[1176]: Stage: fetch
Apr 24 23:53:24.333647 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:24.333662 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:24.333793 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:24.341459 ignition[1176]: PUT result: OK
Apr 24 23:53:24.343481 ignition[1176]: parsed url from cmdline: ""
Apr 24 23:53:24.343498 ignition[1176]: no config URL provided
Apr 24 23:53:24.343509 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:53:24.343547 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:53:24.343571 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:24.344827 ignition[1176]: PUT result: OK
Apr 24 23:53:24.344905 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 24 23:53:24.345677 ignition[1176]: GET result: OK
Apr 24 23:53:24.345769 ignition[1176]: parsing config with SHA512: f651ac81639c72951652f5b913189e9a4bf3703cf6d41cd4e0353481404696548d3ab34ab5c921292835ab153624a8e2ba3d8633c4cbd427f8402b6c8800a1ba
Apr 24 23:53:24.350903 unknown[1176]: fetched base config from "system"
Apr 24 23:53:24.351409 ignition[1176]: fetch: fetch complete
Apr 24 23:53:24.350920 unknown[1176]: fetched base config from "system"
Apr 24 23:53:24.351414 ignition[1176]: fetch: fetch passed
Apr 24 23:53:24.350930 unknown[1176]: fetched user config from "aws"
Apr 24 23:53:24.351469 ignition[1176]: Ignition finished successfully
Apr 24 23:53:24.353609 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:53:24.360013 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:53:24.377055 ignition[1182]: Ignition 2.19.0
Apr 24 23:53:24.377073 ignition[1182]: Stage: kargs
Apr 24 23:53:24.377530 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:24.377545 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:24.377662 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:24.378570 ignition[1182]: PUT result: OK
Apr 24 23:53:24.381557 ignition[1182]: kargs: kargs passed
Apr 24 23:53:24.381633 ignition[1182]: Ignition finished successfully
Apr 24 23:53:24.383589 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:53:24.387976 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:53:24.405458 ignition[1189]: Ignition 2.19.0
Apr 24 23:53:24.405478 ignition[1189]: Stage: disks
Apr 24 23:53:24.405973 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:24.405989 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:24.406118 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:24.408070 ignition[1189]: PUT result: OK
Apr 24 23:53:24.410487 ignition[1189]: disks: disks passed
Apr 24 23:53:24.410560 ignition[1189]: Ignition finished successfully
Apr 24 23:53:24.412446 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:53:24.413154 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:53:24.413498 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:53:24.414088 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:53:24.414643 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:53:24.415298 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:53:24.423017 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:53:24.460564 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:53:24.463990 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:53:24.471208 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:53:24.578980 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:53:24.579599 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:53:24.580651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:53:24.591896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:53:24.595900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:53:24.597011 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:53:24.597078 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:53:24.597111 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:53:24.605554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:53:24.611042 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:53:24.619823 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Apr 24 23:53:24.627448 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:53:24.627519 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:53:24.627543 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:53:24.640815 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:53:24.642408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:53:24.964607 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:53:24.982940 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:53:24.989049 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:53:24.994147 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:53:25.164933 systemd-networkd[1167]: eth0: Gained IPv6LL
Apr 24 23:53:25.243331 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:53:25.248900 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:53:25.252955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:53:25.260693 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:53:25.264797 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:53:25.300802 ignition[1329]: INFO : Ignition 2.19.0
Apr 24 23:53:25.300802 ignition[1329]: INFO : Stage: mount
Apr 24 23:53:25.303459 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:25.303459 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:25.303459 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:25.306014 ignition[1329]: INFO : PUT result: OK
Apr 24 23:53:25.309126 ignition[1329]: INFO : mount: mount passed
Apr 24 23:53:25.309126 ignition[1329]: INFO : Ignition finished successfully
Apr 24 23:53:25.311111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:53:25.311844 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:53:25.317938 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:53:25.337059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:53:25.358175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1342)
Apr 24 23:53:25.360841 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:53:25.360875 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:53:25.363079 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:53:25.371800 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:53:25.374105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:53:25.397611 ignition[1358]: INFO : Ignition 2.19.0 Apr 24 23:53:25.398376 ignition[1358]: INFO : Stage: files Apr 24 23:53:25.399122 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:53:25.399122 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 24 23:53:25.400234 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 24 23:53:25.400234 ignition[1358]: INFO : PUT result: OK Apr 24 23:53:25.403364 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:53:25.414505 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:53:25.414505 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:53:25.461915 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:53:25.462961 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:53:25.463917 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:53:25.463353 unknown[1358]: wrote ssh authorized keys file for user: core Apr 24 23:53:25.474647 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 24 23:53:25.475803 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 24 23:53:25.475803 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:53:25.475803 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:53:25.559823 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 24 
23:53:25.723405 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:53:25.724703 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:53:25.734410 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:53:26.223567 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:53:27.470076 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:53:27.470076 ignition[1358]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 24 23:53:27.481327 ignition[1358]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:53:27.482708 ignition[1358]: INFO : files: files passed
Apr 24 23:53:27.482708 ignition[1358]: INFO : Ignition finished successfully
Apr 24 23:53:27.485831 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:53:27.493821 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:53:27.498029 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:53:27.501156 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:53:27.502099 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:53:27.526589 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:53:27.526589 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:53:27.529911 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:53:27.529487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:53:27.531015 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:53:27.536964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
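The file, link, and unit operations recorded above map onto the sections of an Ignition config (on EC2, typically delivered as instance user data). The sketch below shows the shape such a config could have: the paths come from the log, but the spec version, the `contents` fields, and the elided payloads (`...`) are assumptions, since the log only records that each item was written.

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      { "path": "/home/core/install.sh", "contents": { "source": "data:,..." } },
      { "path": "/etc/flatcar/update.conf", "contents": { "source": "data:,..." } }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "containerd.service",
        "dropins": [ { "name": "10-use-cgroupfs.conf", "contents": "..." } ]
      },
      { "name": "prepare-helm.service", "enabled": true, "contents": "..." }
    ]
  }
}
```

Each `storage.files` entry surfaces as a `createFiles: op(...)` record, the link as the `writing link` op, each unit or drop-in as a `files: op(...)` record, and `"enabled": true` as the `setting preset to enabled` op.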
Apr 24 23:53:27.561765 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:53:27.561921 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:53:27.563572 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:53:27.564486 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:53:27.565292 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:53:27.570965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:53:27.584561 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:53:27.594006 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:53:27.606931 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:53:27.607611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:53:27.608639 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:53:27.609516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:53:27.609690 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:53:27.610961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:53:27.611844 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:53:27.612625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:53:27.613449 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:53:27.614242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:53:27.615138 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:53:27.615930 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:53:27.616723 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:53:27.617939 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:53:27.618698 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:53:27.619503 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:53:27.619681 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:53:27.620844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:53:27.621659 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:53:27.622419 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:53:27.622558 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:53:27.623307 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:53:27.623474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:53:27.624911 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:53:27.625090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:53:27.625841 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:53:27.625991 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:53:27.634099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:53:27.637302 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:53:27.638089 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:53:27.638331 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:53:27.641224 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:53:27.641427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:53:27.648757 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:53:27.649454 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:53:27.660753 ignition[1412]: INFO : Ignition 2.19.0
Apr 24 23:53:27.662145 ignition[1412]: INFO : Stage: umount
Apr 24 23:53:27.663141 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:53:27.663141 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:53:27.663141 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:53:27.665065 ignition[1412]: INFO : PUT result: OK
Apr 24 23:53:27.669338 ignition[1412]: INFO : umount: umount passed
Apr 24 23:53:27.670958 ignition[1412]: INFO : Ignition finished successfully
Apr 24 23:53:27.671951 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:53:27.672143 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:53:27.673879 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:53:27.674433 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:53:27.675026 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:53:27.675079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:53:27.675570 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:53:27.675617 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:53:27.677538 systemd[1]: Stopped target network.target - Network.
Apr 24 23:53:27.678120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:53:27.678196 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:53:27.678873 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:53:27.679436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:53:27.682876 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:53:27.683958 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:53:27.684382 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:53:27.685100 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:53:27.685161 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:53:27.685750 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:53:27.685825 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:53:27.686368 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:53:27.686434 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:53:27.687159 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:53:27.687218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:53:27.688007 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:53:27.688668 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:53:27.691046 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:53:27.692861 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Apr 24 23:53:27.695143 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:53:27.695299 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:53:27.696332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:53:27.696378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:53:27.703963 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:53:27.705061 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:53:27.705137 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:53:27.706002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:53:27.710198 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:53:27.710344 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:53:27.718996 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:53:27.719111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:53:27.720991 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:53:27.721037 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:53:27.723495 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:53:27.724244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:53:27.725511 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:53:27.725718 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:53:27.726716 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:53:27.727005 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:53:27.729501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:53:27.729575 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:53:27.730364 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:53:27.730411 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:53:27.731244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:53:27.731306 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:53:27.732455 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:53:27.732514 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:53:27.733581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:53:27.733642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:53:27.743094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:53:27.743574 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:53:27.743637 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:53:27.744097 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:53:27.744137 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:53:27.744488 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:53:27.744522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:53:27.744953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:53:27.745011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:27.753837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:53:27.753991 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:53:27.823437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:53:27.823584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:53:27.824747 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:53:27.825401 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:53:27.825474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:53:27.831941 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:53:27.885543 systemd[1]: Switching root.
Apr 24 23:53:27.911504 systemd-journald[179]: Journal stopped
Apr 24 23:53:29.902390 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:53:29.902488 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:53:29.902509 kernel: SELinux: policy capability open_perms=1
Apr 24 23:53:29.902533 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:53:29.902552 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:53:29.902569 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:53:29.902589 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:53:29.902611 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:53:29.902645 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:53:29.902670 kernel: audit: type=1403 audit(1777074808.487:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:53:29.902700 systemd[1]: Successfully loaded SELinux policy in 57.747ms.
Apr 24 23:53:29.902749 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.394ms.
Apr 24 23:53:29.902990 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:53:29.903019 systemd[1]: Detected virtualization amazon.
Apr 24 23:53:29.903040 systemd[1]: Detected architecture x86-64.
Apr 24 23:53:29.903062 systemd[1]: Detected first boot.
Apr 24 23:53:29.903084 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:53:29.903180 zram_generator::config[1471]: No configuration found.
Apr 24 23:53:29.903297 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:53:29.903318 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:53:29.903339 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 24 23:53:29.903371 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:53:29.903394 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:53:29.903416 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:53:29.903436 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:53:29.903464 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:53:29.903487 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:53:29.903509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:53:29.903534 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:53:29.903557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:53:29.903580 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:53:29.903604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:53:29.903630 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:53:29.903658 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:53:29.903680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:53:29.903703 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:53:29.903727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:53:29.903750 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:53:29.905901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:53:29.905951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:53:29.905976 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:53:29.906003 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:53:29.906026 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:53:29.906050 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:53:29.906073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:53:29.906096 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:53:29.906118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:53:29.906141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:53:29.906164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:53:29.906187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:53:29.906209 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:53:29.906236 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:53:29.906259 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:53:29.906283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:29.906305 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:53:29.906328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:53:29.906350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:53:29.906374 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:53:29.906398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:53:29.906425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:53:29.906449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:53:29.906470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:53:29.906493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:53:29.906517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:53:29.906541 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:53:29.906563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:53:29.906586 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:53:29.906609 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 24 23:53:29.906636 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 24 23:53:29.906658 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:53:29.906681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:53:29.906704 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:53:29.906738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:53:29.906762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:53:29.909708 kernel: loop: module loaded
Apr 24 23:53:29.909740 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:29.911855 kernel: fuse: init (API version 7.39)
Apr 24 23:53:29.911901 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:53:29.911924 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:53:29.911943 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:53:29.912005 systemd-journald[1585]: Collecting audit messages is disabled.
Apr 24 23:53:29.912049 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:53:29.912073 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:53:29.912095 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:53:29.912122 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:53:29.912142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:53:29.912162 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:53:29.912182 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:53:29.912202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:53:29.912223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:53:29.912248 systemd-journald[1585]: Journal started
Apr 24 23:53:29.912286 systemd-journald[1585]: Runtime Journal (/run/log/journal/ec25dc67a7a5ea3ea5be09d163edee8a) is 4.7M, max 38.2M, 33.4M free.
Apr 24 23:53:29.920831 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:53:29.919706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:53:29.919978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:53:29.922045 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:53:29.922311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:53:29.925119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:53:29.926374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:53:29.928569 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:53:29.952385 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:53:29.959808 kernel: ACPI: bus type drm_connector registered
Apr 24 23:53:29.966293 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:53:29.968107 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:53:29.977318 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:53:29.998991 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:53:30.000901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:53:30.008967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:53:30.009759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:53:30.014998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:53:30.022048 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:53:30.032367 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:53:30.032605 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:53:30.040596 systemd-journald[1585]: Time spent on flushing to /var/log/journal/ec25dc67a7a5ea3ea5be09d163edee8a is 59.376ms for 967 entries.
Apr 24 23:53:30.040596 systemd-journald[1585]: System Journal (/var/log/journal/ec25dc67a7a5ea3ea5be09d163edee8a) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:53:30.115022 systemd-journald[1585]: Received client request to flush runtime journal.
Apr 24 23:53:30.043648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:53:30.054011 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:53:30.055270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:53:30.060000 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:53:30.061102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:53:30.069586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:53:30.079976 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:53:30.094309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:53:30.107677 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:53:30.119455 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:53:30.152390 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 24 23:53:30.160479 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Apr 24 23:53:30.160507 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Apr 24 23:53:30.168877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:53:30.178142 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:53:30.226593 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:53:30.244014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:53:30.245709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:53:30.269833 systemd-tmpfiles[1643]: ACLs are not supported, ignoring.
Apr 24 23:53:30.270240 systemd-tmpfiles[1643]: ACLs are not supported, ignoring.
Apr 24 23:53:30.279292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:53:30.827739 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:53:30.835950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:53:30.861450 systemd-udevd[1652]: Using default interface naming scheme 'v255'.
Apr 24 23:53:30.935603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:53:30.949656 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:53:30.998998 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:53:31.016687 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 24 23:53:31.026341 (udev-worker)[1666]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:53:31.092769 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:53:31.141800 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 24 23:53:31.149803 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 24 23:53:31.176796 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 24 23:53:31.182184 kernel: ACPI: button: Power Button [PWRF]
Apr 24 23:53:31.183048 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 24 23:53:31.212848 kernel: ACPI: button: Sleep Button [SLPF]
Apr 24 23:53:31.228794 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:53:31.231342 systemd-networkd[1655]: lo: Link UP
Apr 24 23:53:31.231740 systemd-networkd[1655]: lo: Gained carrier
Apr 24 23:53:31.235347 systemd-networkd[1655]: Enumeration completed
Apr 24 23:53:31.236292 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:53:31.236429 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:53:31.238815 systemd-networkd[1655]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:53:31.248923 systemd-networkd[1655]: eth0: Link UP
Apr 24 23:53:31.249990 systemd-networkd[1655]: eth0: Gained carrier
Apr 24 23:53:31.250834 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:53:31.254165 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:53:31.262840 systemd-networkd[1655]: eth0: DHCPv4 address 172.31.23.136/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:53:31.263188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:31.273420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
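The eth0 records above show systemd-networkd falling back to Flatcar's catch-all unit, zz-default.network, which is why it warns about a "potentially unpredictable interface name" (the kernel command line carries net.ifnames=0, so the NIC keeps the kernel's ethN name). A sketch of what such a catch-all unit looks like; this is an illustrative approximation, not necessarily the exact file Flatcar ships:

```ini
; Sketch of /usr/lib/systemd/network/zz-default.network (assumed contents)
[Match]
; Matches any interface name, hence the "unpredictable interface name" warning.
Name=*

[Network]
; DHCP on both address families; the log's DHCPv4 lease
; (172.31.23.136/20 from 172.31.16.1) and later IPv6LL event fit this.
DHCP=yes
```

Dropping a more specific `.network` file earlier in the lexical order (e.g. via Ignition) would override this default for a given interface.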
Apr 24 23:53:31.274118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:31.283792 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1664)
Apr 24 23:53:31.298000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:53:31.442070 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:53:31.460183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:53:31.466977 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:53:31.480419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:53:31.493968 lvm[1775]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:53:31.521167 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:53:31.522834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:53:31.528958 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:53:31.535126 lvm[1781]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:53:31.562156 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:53:31.563905 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:53:31.564602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:53:31.564649 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:53:31.566029 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:53:31.568914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:53:31.575008 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:53:31.577327 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:53:31.578258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:53:31.581057 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:53:31.589046 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:53:31.598984 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:53:31.602633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:53:31.619363 kernel: loop0: detected capacity change from 0 to 61336
Apr 24 23:53:31.624204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:53:31.625367 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:53:31.636030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:53:31.727813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:53:31.751268 kernel: loop1: detected capacity change from 0 to 228704
Apr 24 23:53:31.993140 kernel: loop2: detected capacity change from 0 to 140768
Apr 24 23:53:32.116801 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:53:32.224935 kernel: loop4: detected capacity change from 0 to 61336
Apr 24 23:53:32.249546 kernel: loop5: detected capacity change from 0 to 228704
Apr 24 23:53:32.282882 kernel: loop6: detected capacity change from 0 to 140768
Apr 24 23:53:32.304963 kernel: loop7: detected capacity change from 0 to 142488
Apr 24 23:53:32.324004 (sd-merge)[1803]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 24 23:53:32.324684 (sd-merge)[1803]: Merged extensions into '/usr'.
Apr 24 23:53:32.333504 systemd[1]: Reloading requested from client PID 1789 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:53:32.333526 systemd[1]: Reloading...
Apr 24 23:53:32.420797 zram_generator::config[1831]: No configuration found.
Apr 24 23:53:32.578160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:53:32.652950 systemd-networkd[1655]: eth0: Gained IPv6LL
Apr 24 23:53:32.672383 systemd[1]: Reloading finished in 338 ms.
Apr 24 23:53:32.690431 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:53:32.691689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:53:32.707016 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:53:32.711954 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:53:32.735509 systemd[1]: Reloading requested from client PID 1890 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:53:32.735686 systemd[1]: Reloading...
Apr 24 23:53:32.746664 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:53:32.747225 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:53:32.748582 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:53:32.749028 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Apr 24 23:53:32.749118 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Apr 24 23:53:32.755069 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:53:32.755082 systemd-tmpfiles[1891]: Skipping /boot
Apr 24 23:53:32.769897 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:53:32.770054 systemd-tmpfiles[1891]: Skipping /boot
Apr 24 23:53:32.853442 zram_generator::config[1921]: No configuration found.
Apr 24 23:53:32.994719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:53:33.075831 systemd[1]: Reloading finished in 339 ms.
Apr 24 23:53:33.090503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:53:33.109973 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:53:33.115169 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:53:33.125002 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:53:33.135392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:53:33.156263 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:53:33.171887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.172563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:53:33.176101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:53:33.180912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:53:33.202142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:53:33.204645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:53:33.204883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.208319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:53:33.209068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:53:33.222204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:53:33.222461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:53:33.236245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:53:33.237006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:53:33.242307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.247588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:53:33.264403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:53:33.268496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:53:33.273082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:53:33.273276 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:53:33.273409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.277095 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:53:33.279928 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:53:33.282721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:53:33.284344 augenrules[2013]: No rules
Apr 24 23:53:33.283992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:53:33.286926 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:53:33.290403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:53:33.293021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:53:33.327522 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:53:33.333423 ldconfig[1785]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:53:33.335275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.335734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:53:33.343137 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:53:33.347563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:53:33.357489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:53:33.376360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:53:33.377871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:53:33.377971 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:53:33.380527 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:53:33.381389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:53:33.384724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:53:33.384990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:53:33.386326 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:53:33.386562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:53:33.387500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:53:33.387702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:53:33.394746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:53:33.395187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:53:33.399675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:53:33.400423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:53:33.410105 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:53:33.418515 systemd-resolved[1988]: Positive Trust Anchors:
Apr 24 23:53:33.418535 systemd-resolved[1988]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:53:33.418594 systemd-resolved[1988]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:53:33.424738 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:53:33.430402 systemd-resolved[1988]: Defaulting to hostname 'linux'.
Apr 24 23:53:33.432312 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:53:33.433483 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:53:33.435253 systemd[1]: Reached target network.target - Network.
Apr 24 23:53:33.436560 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:53:33.437211 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:53:33.437881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:53:33.438014 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:53:33.438481 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:53:33.438945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:53:33.439465 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:53:33.439942 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:53:33.440300 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:53:33.440681 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:53:33.440732 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:53:33.441093 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:53:33.442619 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:53:33.444614 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:53:33.446307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:53:33.452922 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:53:33.453700 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:53:33.454218 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:53:33.454915 systemd[1]: System is tainted: cgroupsv1
Apr 24 23:53:33.454962 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:53:33.454995 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:53:33.458899 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:53:33.463951 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 24 23:53:33.466128 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:53:33.476904 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:53:33.480572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:53:33.481533 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:53:33.488899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:53:33.491591 jq[2057]: false
Apr 24 23:53:33.499927 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:53:33.511081 systemd[1]: Started ntpd.service - Network Time Service.
Apr 24 23:53:33.530133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:53:33.558943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:53:33.572894 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 24 23:53:33.582988 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:53:33.601021 dbus-daemon[2055]: [system] SELinux support is enabled
Apr 24 23:53:33.609387 dbus-daemon[2055]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1655 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found loop4
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found loop5
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found loop6
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found loop7
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p1
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p2
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p3
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found usr
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p4
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p6
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p7
Apr 24 23:53:33.612228 extend-filesystems[2058]: Found nvme0n1p9
Apr 24 23:53:33.612228 extend-filesystems[2058]: Checking size of /dev/nvme0n1p9
Apr 24 23:53:33.602658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:53:33.618983 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:53:33.621943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:53:33.631042 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:53:33.644915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:53:33.648696 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:53:33.657021 ntpd[2062]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting
Apr 24 23:53:33.657057 ntpd[2062]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: ----------------------------------------------------
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: ntp-4 is maintained by Network Time Foundation,
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: corporation. Support and training for ntp-4 are
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: available at https://www.nwtime.org/support
Apr 24 23:53:33.657452 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: ----------------------------------------------------
Apr 24 23:53:33.657067 ntpd[2062]: ----------------------------------------------------
Apr 24 23:53:33.657077 ntpd[2062]: ntp-4 is maintained by Network Time Foundation,
Apr 24 23:53:33.657087 ntpd[2062]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 24 23:53:33.657097 ntpd[2062]: corporation. Support and training for ntp-4 are
Apr 24 23:53:33.657107 ntpd[2062]: available at https://www.nwtime.org/support
Apr 24 23:53:33.657117 ntpd[2062]: ----------------------------------------------------
Apr 24 23:53:33.669179 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:53:33.669507 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:53:33.674363 ntpd[2062]: proto: precision = 0.102 usec (-23)
Apr 24 23:53:33.677016 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: proto: precision = 0.102 usec (-23)
Apr 24 23:53:33.685025 ntpd[2062]: basedate set to 2026-04-12
Apr 24 23:53:33.685194 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: basedate set to 2026-04-12
Apr 24 23:53:33.685194 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: gps base set to 2026-04-12 (week 2414)
Apr 24 23:53:33.685052 ntpd[2062]: gps base set to 2026-04-12 (week 2414)
Apr 24 23:53:33.689246 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:53:33.689580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:53:33.700354 ntpd[2062]: Listen and drop on 0 v6wildcard [::]:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen and drop on 0 v6wildcard [::]:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen normally on 2 lo 127.0.0.1:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen normally on 3 eth0 172.31.23.136:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen normally on 4 lo [::1]:123
Apr 24 23:53:33.701916 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listen normally on 5 eth0 [fe80::48d:1fff:fe6a:1ff%2]:123
Apr 24 23:53:33.700411 ntpd[2062]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 24 23:53:33.700615 ntpd[2062]: Listen normally on 2 lo 127.0.0.1:123
Apr 24 23:53:33.700658 ntpd[2062]: Listen normally on 3 eth0 172.31.23.136:123
Apr 24 23:53:33.700700 ntpd[2062]: Listen normally on 4 lo [::1]:123
Apr 24 23:53:33.700743 ntpd[2062]: Listen normally on 5 eth0 [fe80::48d:1fff:fe6a:1ff%2]:123
Apr 24 23:53:33.703829 ntpd[2062]: Listening on routing socket on fd #22 for interface updates
Apr 24 23:53:33.703946 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: Listening on routing socket on fd #22 for interface updates
Apr 24 23:53:33.710448 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:53:33.710789 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:53:33.730277 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:53:33.733834 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:53:33.733834 ntpd[2062]: 24 Apr 23:53:33 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:53:33.730325 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 24 23:53:33.736801 jq[2087]: true
Apr 24 23:53:33.762478 update_engine[2084]: I20260424 23:53:33.762380  2084 main.cc:92] Flatcar Update Engine starting
Apr 24 23:53:33.777330 update_engine[2084]: I20260424 23:53:33.770962  2084 update_check_scheduler.cc:74] Next update check in 3m9s
Apr 24 23:53:33.777390 extend-filesystems[2058]: Resized partition /dev/nvme0n1p9
Apr 24 23:53:33.781432 extend-filesystems[2114]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:53:33.795743 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 24 23:53:33.794526 (ntainerd)[2101]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:53:33.801684 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 24 23:53:33.804209 tar[2094]: linux-amd64/LICENSE
Apr 24 23:53:33.804209 tar[2094]: linux-amd64/helm
Apr 24 23:53:33.802996 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:53:33.810024 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:53:33.810062 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:53:33.821203 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 24 23:53:33.821767 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:53:33.821818 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:53:33.825128 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:53:33.829973 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:53:33.847314 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:53:33.854834 jq[2110]: true
Apr 24 23:53:33.886007 coreos-metadata[2054]: Apr 24 23:53:33.874 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 24 23:53:33.892480 coreos-metadata[2054]: Apr 24 23:53:33.890 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 24 23:53:33.893870 coreos-metadata[2054]: Apr 24 23:53:33.892 INFO Fetch successful
Apr 24 23:53:33.893870 coreos-metadata[2054]: Apr 24 23:53:33.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 24 23:53:33.896404 coreos-metadata[2054]: Apr 24 23:53:33.896 INFO Fetch successful
Apr 24 23:53:33.896404 coreos-metadata[2054]: Apr 24 23:53:33.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 24 23:53:33.898004 coreos-metadata[2054]: Apr 24 23:53:33.897 INFO Fetch successful
Apr 24 23:53:33.898004 coreos-metadata[2054]: Apr 24 23:53:33.897 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 24 23:53:33.899372 coreos-metadata[2054]: Apr 24 23:53:33.899 INFO Fetch successful
Apr 24 23:53:33.899372 coreos-metadata[2054]: Apr 24 23:53:33.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 24 23:53:33.901237 coreos-metadata[2054]: Apr 24 23:53:33.901 INFO Fetch failed with 404: resource not found
Apr 24 23:53:33.901237 coreos-metadata[2054]: Apr 24 23:53:33.901 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 24 23:53:33.912362 coreos-metadata[2054]: Apr 24 23:53:33.910 INFO Fetch successful
Apr 24 23:53:33.912362 coreos-metadata[2054]: Apr 24 23:53:33.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 24 23:53:33.907382 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 24 23:53:33.913240 coreos-metadata[2054]: Apr 24 23:53:33.913 INFO Fetch successful
Apr 24 23:53:33.913240 coreos-metadata[2054]: Apr 24 23:53:33.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 24 23:53:33.915281 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 24 23:53:33.916932 coreos-metadata[2054]: Apr 24 23:53:33.915 INFO Fetch successful
Apr 24 23:53:33.916932 coreos-metadata[2054]: Apr 24 23:53:33.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 24 23:53:33.918289 coreos-metadata[2054]: Apr 24 23:53:33.918 INFO Fetch successful
Apr 24 23:53:33.918289 coreos-metadata[2054]: Apr 24 23:53:33.918 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 24 23:53:33.928509 coreos-metadata[2054]: Apr 24 23:53:33.925 INFO Fetch successful
Apr 24 23:53:33.982752 systemd-logind[2083]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 24 23:53:33.989840 systemd-logind[2083]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 24 23:53:33.989894 systemd-logind[2083]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:53:33.993525 systemd-logind[2083]: New seat seat0.
Apr 24 23:53:34.003217 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 23:53:34.017644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 23:53:34.022591 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:53:34.056276 bash[2158]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:53:34.059303 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:53:34.072310 systemd[1]: Starting sshkeys.service...
Apr 24 23:53:34.103794 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 24 23:53:34.127098 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 24 23:53:34.137665 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 24 23:53:34.163733 amazon-ssm-agent[2141]: Initializing new seelog logger
Apr 24 23:53:34.165014 amazon-ssm-agent[2141]: New Seelog Logger Creation Complete
Apr 24 23:53:34.165181 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.165233 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.171286 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 processing appconfig overrides
Apr 24 23:53:34.175806 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.175806 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.175806 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 processing appconfig overrides
Apr 24 23:53:34.175806 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.175806 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.183065 extend-filesystems[2114]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 24 23:53:34.183065 extend-filesystems[2114]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 24 23:53:34.183065 extend-filesystems[2114]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 24 23:53:34.228754 extend-filesystems[2058]: Resized filesystem in /dev/nvme0n1p9
Apr 24 23:53:34.232928 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO Proxy environment variables:
Apr 24 23:53:34.232928 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 processing appconfig overrides
Apr 24 23:53:34.189079 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:53:34.189422 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:53:34.240970 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.240970 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:53:34.240970 amazon-ssm-agent[2141]: 2026/04/24 23:53:34 processing appconfig overrides
Apr 24 23:53:34.271802 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2177)
Apr 24 23:53:34.290903 coreos-metadata[2171]: Apr 24 23:53:34.290 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 24 23:53:34.293954 coreos-metadata[2171]: Apr 24 23:53:34.292 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 24 23:53:34.295560 coreos-metadata[2171]: Apr 24 23:53:34.294 INFO Fetch successful
Apr 24 23:53:34.295560 coreos-metadata[2171]: Apr 24 23:53:34.295 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 24 23:53:34.299989 coreos-metadata[2171]: Apr 24 23:53:34.297 INFO Fetch successful
Apr 24 23:53:34.308104 unknown[2171]: wrote ssh authorized keys file for user: core
Apr 24 23:53:34.314394 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO https_proxy:
Apr 24 23:53:34.405957 update-ssh-keys[2193]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:53:34.407746 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 24 23:53:34.420897 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO http_proxy:
Apr 24 23:53:34.425294 systemd[1]: Finished sshkeys.service.
Apr 24 23:53:34.510083 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 24 23:53:34.510265 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 24 23:53:34.515755 dbus-daemon[2055]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2124 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 24 23:53:34.520851 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO no_proxy:
Apr 24 23:53:34.533206 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 24 23:53:34.596412 locksmithd[2125]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:53:34.599601 polkitd[2248]: Started polkitd version 121
Apr 24 23:53:34.613445 polkitd[2248]: Loading rules from directory /etc/polkit-1/rules.d
Apr 24 23:53:34.613524 polkitd[2248]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 24 23:53:34.614116 polkitd[2248]: Finished loading, compiling and executing 2 rules
Apr 24 23:53:34.623354 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 24 23:53:34.623607 systemd[1]: Started polkit.service - Authorization Manager.
Apr 24 23:53:34.624816 polkitd[2248]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 24 23:53:34.643251 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO Checking if agent identity type OnPrem can be assumed
Apr 24 23:53:34.712120 systemd-hostnamed[2124]: Hostname set to (transient)
Apr 24 23:53:34.712240 systemd-resolved[1988]: System hostname changed to 'ip-172-31-23-136'.
Apr 24 23:53:34.734464 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO Checking if agent identity type EC2 can be assumed
Apr 24 23:53:34.837469 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO Agent will take identity from EC2
Apr 24 23:53:34.879128 containerd[2101]: time="2026-04-24T23:53:34.878989453Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:53:34.927809 containerd[2101]: time="2026-04-24T23:53:34.927496869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.929768723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.929828739Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.929851497Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.930038654Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.930060501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.930140013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:53:34.930244 containerd[2101]: time="2026-04-24T23:53:34.930159208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.930946063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.930976900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.930999579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.931015236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.931129119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931418 containerd[2101]: time="2026-04-24T23:53:34.931383440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931915 containerd[2101]: time="2026-04-24T23:53:34.931892096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:53:34.931989 containerd[2101]: time="2026-04-24T23:53:34.931975712Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:53:34.932136 containerd[2101]: time="2026-04-24T23:53:34.932121752Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:53:34.932277 containerd[2101]: time="2026-04-24T23:53:34.932230880Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:53:34.936307 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938225816Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938299757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938323137Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938387258Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938411998Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:53:34.938685 containerd[2101]: time="2026-04-24T23:53:34.938612324Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.945891698Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946131620Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946166838Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946195439Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946223969Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946250938Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946271846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946299392Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946327546Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946353715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946378518Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946402823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946439013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.946793 containerd[2101]: time="2026-04-24T23:53:34.946467001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946489306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946517113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946542489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946568694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946593819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946626545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946663266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946687741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946712760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946736862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.947326 containerd[2101]: time="2026-04-24T23:53:34.946761618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.949837489Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.949909444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.949954229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.949980804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950069578Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950117032Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950141401Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950166942Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950202687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950225256Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950246765Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:53:34.951794 containerd[2101]: time="2026-04-24T23:53:34.950282728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:53:34.952288 containerd[2101]: time="2026-04-24T23:53:34.950895585Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:53:34.952288 containerd[2101]: time="2026-04-24T23:53:34.951049254Z" level=info msg="Connect containerd service"
Apr 24 23:53:34.952288 containerd[2101]: time="2026-04-24T23:53:34.951094653Z" level=info msg="using legacy CRI server"
Apr 24 23:53:34.952288 containerd[2101]: time="2026-04-24T23:53:34.951106019Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:53:34.952288 containerd[2101]: time="2026-04-24T23:53:34.951231966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955351110Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955685250Z" level=info msg="Start subscribing containerd event"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955752214Z" level=info msg="Start recovering state"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955881605Z" level=info msg="Start event monitor"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955907847Z" level=info msg="Start snapshots syncer"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955920409Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.955930914Z" level=info msg="Start streaming server"
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.956411616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:53:34.957494 containerd[2101]: time="2026-04-24T23:53:34.956515024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:53:34.956756 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:53:34.962516 containerd[2101]: time="2026-04-24T23:53:34.961211308Z" level=info msg="containerd successfully booted in 0.083384s"
Apr 24 23:53:35.037216 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:53:35.138860 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:53:35.203853 sshd_keygen[2093]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:53:35.236124 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 24 23:53:35.281310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:53:35.298184 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:53:35.323835 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:53:35.324373 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:53:35.335894 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 24 23:53:35.338248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:53:35.377348 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:53:35.389224 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:53:35.403216 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:53:35.404156 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:53:35.436707 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] Starting Core Agent
Apr 24 23:53:35.536261 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 24 23:53:35.638160 tar[2094]: linux-amd64/README.md
Apr 24 23:53:35.641861 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [Registrar] Starting registrar module
Apr 24 23:53:35.656634 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:53:35.686614 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:53:35.694452 systemd[1]: Started sshd@0-172.31.23.136:22-4.175.71.9:44836.service - OpenSSH per-connection server daemon (4.175.71.9:44836).
Apr 24 23:53:35.742222 amazon-ssm-agent[2141]: 2026-04-24 23:53:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 24 23:53:35.993809 amazon-ssm-agent[2141]: 2026-04-24 23:53:35 INFO [EC2Identity] EC2 registration was successful.
Apr 24 23:53:36.020205 amazon-ssm-agent[2141]: 2026-04-24 23:53:35 INFO [CredentialRefresher] credentialRefresher has started
Apr 24 23:53:36.020205 amazon-ssm-agent[2141]: 2026-04-24 23:53:35 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 24 23:53:36.020205 amazon-ssm-agent[2141]: 2026-04-24 23:53:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 24 23:53:36.095028 amazon-ssm-agent[2141]: 2026-04-24 23:53:36 INFO [CredentialRefresher] Next credential rotation will be in 30.766660637116665 minutes
Apr 24 23:53:36.724106 sshd[2337]: Accepted publickey for core from 4.175.71.9 port 44836 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:36.727071 sshd[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:36.736953 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:53:36.744820 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:53:36.749826 systemd-logind[2083]: New session 1 of user core.
Apr 24 23:53:36.765817 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:53:36.776105 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:53:36.784625 (systemd)[2344]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:53:36.905748 systemd[2344]: Queued start job for default target default.target.
Apr 24 23:53:36.906263 systemd[2344]: Created slice app.slice - User Application Slice.
Apr 24 23:53:36.906303 systemd[2344]: Reached target paths.target - Paths.
Apr 24 23:53:36.906321 systemd[2344]: Reached target timers.target - Timers.
Apr 24 23:53:36.911882 systemd[2344]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:53:36.921482 systemd[2344]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:53:36.921713 systemd[2344]: Reached target sockets.target - Sockets.
Apr 24 23:53:36.921844 systemd[2344]: Reached target basic.target - Basic System.
Apr 24 23:53:36.922139 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:53:36.925314 systemd[2344]: Reached target default.target - Main User Target.
Apr 24 23:53:36.925491 systemd[2344]: Startup finished in 134ms.
Apr 24 23:53:36.929132 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:53:37.032922 amazon-ssm-agent[2141]: 2026-04-24 23:53:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 24 23:53:37.133230 amazon-ssm-agent[2141]: 2026-04-24 23:53:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2356) started
Apr 24 23:53:37.233419 amazon-ssm-agent[2141]: 2026-04-24 23:53:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 24 23:53:37.620152 systemd[1]: Started sshd@1-172.31.23.136:22-4.175.71.9:44842.service - OpenSSH per-connection server daemon (4.175.71.9:44842).
Apr 24 23:53:38.449994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:53:38.453371 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:53:38.457162 systemd[1]: Startup finished in 8.445s (kernel) + 10.024s (userspace) = 18.470s.
Apr 24 23:53:38.461956 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:53:38.596683 sshd[2368]: Accepted publickey for core from 4.175.71.9 port 44842 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:38.598517 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:38.604837 systemd-logind[2083]: New session 2 of user core.
Apr 24 23:53:38.614433 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:53:39.275976 sshd[2368]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:39.282662 systemd[1]: sshd@1-172.31.23.136:22-4.175.71.9:44842.service: Deactivated successfully.
Apr 24 23:53:39.282717 systemd-logind[2083]: Session 2 logged out. Waiting for processes to exit.
Apr 24 23:53:39.285727 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 23:53:39.287155 systemd-logind[2083]: Removed session 2.
Apr 24 23:53:39.437162 systemd[1]: Started sshd@2-172.31.23.136:22-4.175.71.9:44846.service - OpenSSH per-connection server daemon (4.175.71.9:44846).
Apr 24 23:53:40.382411 sshd[2393]: Accepted publickey for core from 4.175.71.9 port 44846 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:40.384717 sshd[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:40.387173 kubelet[2378]: E0424 23:53:40.387091 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:53:40.390394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:53:40.390676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:53:40.397657 systemd-logind[2083]: New session 3 of user core.
Apr 24 23:53:40.407247 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 23:53:41.596733 systemd-resolved[1988]: Clock change detected. Flushing caches.
Apr 24 23:53:41.974197 sshd[2393]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:41.978083 systemd[1]: sshd@2-172.31.23.136:22-4.175.71.9:44846.service: Deactivated successfully.
Apr 24 23:53:41.983624 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 23:53:41.984574 systemd-logind[2083]: Session 3 logged out. Waiting for processes to exit.
Apr 24 23:53:41.985789 systemd-logind[2083]: Removed session 3.
Apr 24 23:53:42.155636 systemd[1]: Started sshd@3-172.31.23.136:22-4.175.71.9:44854.service - OpenSSH per-connection server daemon (4.175.71.9:44854).
Apr 24 23:53:43.172942 sshd[2404]: Accepted publickey for core from 4.175.71.9 port 44854 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:43.173641 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:43.178988 systemd-logind[2083]: New session 4 of user core.
Apr 24 23:53:43.184713 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 23:53:43.880021 sshd[2404]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:43.883542 systemd[1]: sshd@3-172.31.23.136:22-4.175.71.9:44854.service: Deactivated successfully.
Apr 24 23:53:43.889521 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 23:53:43.890391 systemd-logind[2083]: Session 4 logged out. Waiting for processes to exit.
Apr 24 23:53:43.892448 systemd-logind[2083]: Removed session 4.
Apr 24 23:53:44.040669 systemd[1]: Started sshd@4-172.31.23.136:22-4.175.71.9:44858.service - OpenSSH per-connection server daemon (4.175.71.9:44858).
Apr 24 23:53:44.986981 sshd[2412]: Accepted publickey for core from 4.175.71.9 port 44858 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:44.987677 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:44.992610 systemd-logind[2083]: New session 5 of user core.
Apr 24 23:53:45.003677 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 23:53:45.516964 sudo[2416]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:53:45.517395 sudo[2416]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:53:45.530870 sudo[2416]: pam_unix(sudo:session): session closed for user root
Apr 24 23:53:45.685520 sshd[2412]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:45.689864 systemd[1]: sshd@4-172.31.23.136:22-4.175.71.9:44858.service: Deactivated successfully.
Apr 24 23:53:45.695064 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:53:45.696193 systemd-logind[2083]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:53:45.697422 systemd-logind[2083]: Removed session 5.
Apr 24 23:53:45.865619 systemd[1]: Started sshd@5-172.31.23.136:22-4.175.71.9:48898.service - OpenSSH per-connection server daemon (4.175.71.9:48898).
Apr 24 23:53:46.873695 sshd[2421]: Accepted publickey for core from 4.175.71.9 port 48898 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:46.875243 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:46.880755 systemd-logind[2083]: New session 6 of user core.
Apr 24 23:53:46.886863 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:53:47.410930 sudo[2426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:53:47.411388 sudo[2426]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:53:47.415713 sudo[2426]: pam_unix(sudo:session): session closed for user root
Apr 24 23:53:47.421538 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:53:47.421940 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:53:47.447036 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:53:47.449399 auditctl[2429]: No rules
Apr 24 23:53:47.449822 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:53:47.450099 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:53:47.461801 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:53:47.489077 augenrules[2448]: No rules
Apr 24 23:53:47.490956 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:53:47.494544 sudo[2425]: pam_unix(sudo:session): session closed for user root
Apr 24 23:53:47.660536 sshd[2421]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:47.663874 systemd[1]: sshd@5-172.31.23.136:22-4.175.71.9:48898.service: Deactivated successfully.
Apr 24 23:53:47.668581 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:53:47.670664 systemd-logind[2083]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:53:47.671874 systemd-logind[2083]: Removed session 6.
Apr 24 23:53:47.825689 systemd[1]: Started sshd@6-172.31.23.136:22-4.175.71.9:48906.service - OpenSSH per-connection server daemon (4.175.71.9:48906).
Apr 24 23:53:48.798574 sshd[2457]: Accepted publickey for core from 4.175.71.9 port 48906 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:53:48.800215 sshd[2457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:53:48.806040 systemd-logind[2083]: New session 7 of user core.
Apr 24 23:53:48.813599 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:53:49.319326 sudo[2461]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:53:49.319739 sudo[2461]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:53:50.447639 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:53:50.450775 (dockerd)[2476]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:53:51.168095 dockerd[2476]: time="2026-04-24T23:53:51.168024129Z" level=info msg="Starting up"
Apr 24 23:53:51.363828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:53:51.374011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:53:51.407447 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2592834023-merged.mount: Deactivated successfully.
Apr 24 23:53:52.010650 dockerd[2476]: time="2026-04-24T23:53:52.010604736Z" level=info msg="Loading containers: start."
Apr 24 23:53:52.045522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:53:52.052825 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:53:52.122086 kubelet[2509]: E0424 23:53:52.122046 2509 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:53:52.128765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:53:52.128997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:53:52.204412 kernel: Initializing XFRM netlink socket
Apr 24 23:53:52.277083 (udev-worker)[2516]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:53:52.345518 systemd-networkd[1655]: docker0: Link UP
Apr 24 23:53:52.363871 dockerd[2476]: time="2026-04-24T23:53:52.363824634Z" level=info msg="Loading containers: done."
Apr 24 23:53:52.397322 dockerd[2476]: time="2026-04-24T23:53:52.397228217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 23:53:52.397598 dockerd[2476]: time="2026-04-24T23:53:52.397404782Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 24 23:53:52.397598 dockerd[2476]: time="2026-04-24T23:53:52.397538428Z" level=info msg="Daemon has completed initialization"
Apr 24 23:53:52.437289 dockerd[2476]: time="2026-04-24T23:53:52.436857278Z" level=info msg="API listen on /run/docker.sock"
Apr 24 23:53:52.437239 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 23:53:53.976954 containerd[2101]: time="2026-04-24T23:53:53.976912080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 23:53:54.574227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673751235.mount: Deactivated successfully.
Apr 24 23:53:56.607673 containerd[2101]: time="2026-04-24T23:53:56.607613184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:56.609375 containerd[2101]: time="2026-04-24T23:53:56.609305252Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 24 23:53:56.611334 containerd[2101]: time="2026-04-24T23:53:56.611237880Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:56.616199 containerd[2101]: time="2026-04-24T23:53:56.615584935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:56.617168 containerd[2101]: time="2026-04-24T23:53:56.617128232Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.640168679s"
Apr 24 23:53:56.617255 containerd[2101]: time="2026-04-24T23:53:56.617177647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 24 23:53:56.618367 containerd[2101]: time="2026-04-24T23:53:56.618339101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 23:53:58.608763 containerd[2101]: time="2026-04-24T23:53:58.608708003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:58.615291 containerd[2101]: time="2026-04-24T23:53:58.614387822Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 24 23:53:58.615291 containerd[2101]: time="2026-04-24T23:53:58.614528642Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:58.618316 containerd[2101]: time="2026-04-24T23:53:58.618256533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:53:58.619751 containerd[2101]: time="2026-04-24T23:53:58.619471925Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 2.001091882s"
Apr 24 23:53:58.619751 containerd[2101]: time="2026-04-24T23:53:58.619512426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 24 23:53:58.620305 containerd[2101]: time="2026-04-24T23:53:58.620043053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:54:00.206488 containerd[2101]: time="2026-04-24T23:54:00.206428275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:00.207978 containerd[2101]: time="2026-04-24T23:54:00.207920472Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 24 23:54:00.209520 containerd[2101]: time="2026-04-24T23:54:00.208992986Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:00.212122 containerd[2101]: time="2026-04-24T23:54:00.212083579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:00.213542 containerd[2101]: time="2026-04-24T23:54:00.213492956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.593413734s"
Apr 24 23:54:00.213681 containerd[2101]: time="2026-04-24T23:54:00.213658453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 24 23:54:00.214300 containerd[2101]: time="2026-04-24T23:54:00.214260924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 23:54:01.402222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780008638.mount: Deactivated successfully.
Apr 24 23:54:02.367548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 24 23:54:02.391930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:54:02.962489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:54:02.971929 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:54:03.039065 kubelet[2722]: E0424 23:54:03.038651 2722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:54:03.042466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:54:03.042844 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:54:03.337544 containerd[2101]: time="2026-04-24T23:54:03.336914900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:03.339309 containerd[2101]: time="2026-04-24T23:54:03.339216119Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 24 23:54:03.341899 containerd[2101]: time="2026-04-24T23:54:03.340758161Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:03.345615 containerd[2101]: time="2026-04-24T23:54:03.345563728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:03.346693 containerd[2101]: time="2026-04-24T23:54:03.346645725Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 3.132331441s"
Apr 24 23:54:03.346859 containerd[2101]: time="2026-04-24T23:54:03.346698778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 24 23:54:03.347261 containerd[2101]: time="2026-04-24T23:54:03.347230370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 23:54:03.854963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181602310.mount: Deactivated successfully.
Apr 24 23:54:05.055201 containerd[2101]: time="2026-04-24T23:54:05.055130447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.057436 containerd[2101]: time="2026-04-24T23:54:05.057370974Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 24 23:54:05.059453 containerd[2101]: time="2026-04-24T23:54:05.059388334Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.065090 containerd[2101]: time="2026-04-24T23:54:05.064809802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.066366 containerd[2101]: time="2026-04-24T23:54:05.066150744Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.718875891s"
Apr 24 23:54:05.066366 containerd[2101]: time="2026-04-24T23:54:05.066199220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 24 23:54:05.067391 containerd[2101]: time="2026-04-24T23:54:05.067348704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:54:05.549841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227237029.mount: Deactivated successfully.
Apr 24 23:54:05.557239 containerd[2101]: time="2026-04-24T23:54:05.557184778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.558446 containerd[2101]: time="2026-04-24T23:54:05.558215806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 24 23:54:05.560663 containerd[2101]: time="2026-04-24T23:54:05.560594255Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.563428 containerd[2101]: time="2026-04-24T23:54:05.563375191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:05.564922 containerd[2101]: time="2026-04-24T23:54:05.564308665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.920256ms"
Apr 24 23:54:05.564922 containerd[2101]: time="2026-04-24T23:54:05.564351617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 24 23:54:05.565214 containerd[2101]: time="2026-04-24T23:54:05.565182166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:54:05.684095 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 24 23:54:06.083347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404224165.mount: Deactivated successfully.
Apr 24 23:54:07.432020 containerd[2101]: time="2026-04-24T23:54:07.431956139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:07.433471 containerd[2101]: time="2026-04-24T23:54:07.433301909Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 24 23:54:07.435303 containerd[2101]: time="2026-04-24T23:54:07.434764929Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:07.438481 containerd[2101]: time="2026-04-24T23:54:07.438438317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:07.439895 containerd[2101]: time="2026-04-24T23:54:07.439854508Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.874633297s"
Apr 24 23:54:07.440061 containerd[2101]: time="2026-04-24T23:54:07.440039297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 24 23:54:10.717304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:54:10.724641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:54:10.766851 systemd[1]: Reloading requested from client PID 2883 ('systemctl') (unit session-7.scope)...
Apr 24 23:54:10.766877 systemd[1]: Reloading...
Apr 24 23:54:10.882299 zram_generator::config[2920]: No configuration found.
Apr 24 23:54:11.059634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:54:11.168945 systemd[1]: Reloading finished in 401 ms.
Apr 24 23:54:11.213988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 23:54:11.214339 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 23:54:11.214898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:54:11.225033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:54:11.606552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:54:11.618874 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:54:11.674345 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:54:11.674345 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:54:11.674345 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:54:11.674904 kubelet[2996]: I0424 23:54:11.674439 2996 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:54:12.310313 kubelet[2996]: I0424 23:54:12.308602 2996 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:54:12.310313 kubelet[2996]: I0424 23:54:12.308792 2996 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:54:12.310313 kubelet[2996]: I0424 23:54:12.309106 2996 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:54:12.352505 kubelet[2996]: I0424 23:54:12.352211 2996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:54:12.356176 kubelet[2996]: E0424 23:54:12.356118 2996 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:54:12.367946 kubelet[2996]: E0424 23:54:12.367899 2996 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:54:12.367946 kubelet[2996]: I0424 23:54:12.367939 2996 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:54:12.378580 kubelet[2996]: I0424 23:54:12.378538 2996 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:54:12.382446 kubelet[2996]: I0424 23:54:12.382378 2996 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:54:12.387280 kubelet[2996]: I0424 23:54:12.382449 2996 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 24 23:54:12.387474 kubelet[2996]: I0424 23:54:12.387292 2996 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:54:12.387474 kubelet[2996]: I0424 23:54:12.387327 2996 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:54:12.387605 kubelet[2996]: I0424 23:54:12.387521 2996 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:54:12.395372 kubelet[2996]: I0424 23:54:12.395323 2996 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:54:12.395372 kubelet[2996]: I0424 23:54:12.395366 2996 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:54:12.396571 kubelet[2996]: I0424 23:54:12.395402 2996 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:54:12.396571 kubelet[2996]: I0424 23:54:12.395423 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:54:12.416529 kubelet[2996]: E0424 23:54:12.416489 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:54:12.417463 kubelet[2996]: I0424 23:54:12.417435 2996 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:54:12.418122 kubelet[2996]: I0424 23:54:12.418092 2996 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:54:12.419852 kubelet[2996]: E0424 23:54:12.419642 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-136&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:54:12.420376 kubelet[2996]: W0424 23:54:12.420357 2996 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:54:12.432169 kubelet[2996]: I0424 23:54:12.432117 2996 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:54:12.432310 kubelet[2996]: I0424 23:54:12.432234 2996 server.go:1289] "Started kubelet"
Apr 24 23:54:12.435659 kubelet[2996]: I0424 23:54:12.434667 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:54:12.438179 kubelet[2996]: I0424 23:54:12.437200 2996 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:54:12.438536 kubelet[2996]: I0424 23:54:12.438518 2996 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:54:12.445289 kubelet[2996]: I0424 23:54:12.444572 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:54:12.445795 kubelet[2996]: I0424 23:54:12.445701 2996 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:54:12.447146 kubelet[2996]: I0424 23:54:12.447123 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:54:12.449744 kubelet[2996]: I0424 23:54:12.448930 2996 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:54:12.449744 kubelet[2996]: E0424 23:54:12.449222 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-136\" not found"
Apr 24 23:54:12.449928 kubelet[2996]: E0424 23:54:12.447432 2996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.136:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.136:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-136.18a97029b67fb9d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-136,UID:ip-172-31-23-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-136,},FirstTimestamp:2026-04-24 23:54:12.432165331 +0000 UTC m=+0.807536085,LastTimestamp:2026-04-24 23:54:12.432165331 +0000 UTC m=+0.807536085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-136,}"
Apr 24 23:54:12.454671 kubelet[2996]: I0424 23:54:12.453539 2996 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:54:12.454671 kubelet[2996]: I0424 23:54:12.453604 2996 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:54:12.454894 kubelet[2996]: E0424 23:54:12.454258 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:54:12.455451 kubelet[2996]: E0424 23:54:12.455407 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="200ms"
Apr 24 23:54:12.461191 kubelet[2996]: E0424 23:54:12.461162 2996 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:54:12.463298 kubelet[2996]: I0424 23:54:12.461615 2996 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:54:12.463656 kubelet[2996]: I0424 23:54:12.463639 2996 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:54:12.463876 kubelet[2996]: I0424 23:54:12.463854 2996 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:54:12.484331 kubelet[2996]: I0424 23:54:12.483428 2996 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:54:12.487307 kubelet[2996]: I0424 23:54:12.486355 2996 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:54:12.487307 kubelet[2996]: I0424 23:54:12.486380 2996 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:54:12.487307 kubelet[2996]: I0424 23:54:12.486409 2996 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:54:12.487307 kubelet[2996]: I0424 23:54:12.486418 2996 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:54:12.487307 kubelet[2996]: E0424 23:54:12.486466 2996 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:54:12.497064 kubelet[2996]: E0424 23:54:12.496965 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:54:12.510823 kubelet[2996]: I0424 23:54:12.510792 2996 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:54:12.510823 kubelet[2996]: I0424 23:54:12.510818 2996 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:54:12.511008 kubelet[2996]: I0424 23:54:12.510837 2996 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:54:12.518160 kubelet[2996]: I0424 23:54:12.518124 2996 policy_none.go:49] "None policy: Start"
Apr 24 23:54:12.518160 kubelet[2996]: I0424 23:54:12.518159 2996 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:54:12.518362 kubelet[2996]: I0424 23:54:12.518177 2996 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:54:12.535307 kubelet[2996]: E0424 23:54:12.535232 2996 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:54:12.536769 kubelet[2996]: I0424 23:54:12.535855 2996 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:54:12.536769 kubelet[2996]: I0424 23:54:12.535879 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:54:12.537619 kubelet[2996]: I0424 23:54:12.537604 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:54:12.539207 kubelet[2996]: E0424 23:54:12.539181 2996 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:54:12.542311 kubelet[2996]: E0424 23:54:12.540343 2996 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-136\" not found"
Apr 24 23:54:12.606364 kubelet[2996]: E0424 23:54:12.606239 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136"
Apr 24 23:54:12.629098 kubelet[2996]: E0424 23:54:12.629054 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136"
Apr 24 23:54:12.638357 kubelet[2996]: I0424 23:54:12.638319 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136"
Apr 24 23:54:12.638714 kubelet[2996]: E0424 23:54:12.638683 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136"
Apr 24 23:54:12.643242 kubelet[2996]: E0424 23:54:12.643117 2996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.136:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.136:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-136.18a97029b67fb9d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-136,UID:ip-172-31-23-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-136,},FirstTimestamp:2026-04-24 23:54:12.432165331 +0000 UTC m=+0.807536085,LastTimestamp:2026-04-24 23:54:12.432165331 +0000 UTC m=+0.807536085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-136,}"
Apr 24 23:54:12.651397 kubelet[2996]: E0424 23:54:12.651143 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136"
Apr 24 23:54:12.654762 kubelet[2996]: I0424 23:54:12.654719 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:12.654762 kubelet[2996]: I0424 23:54:12.654759 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:12.654937 kubelet[2996]: I0424 23:54:12.654783 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:12.654937 kubelet[2996]: I0424 23:54:12.654808 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:12.654937 kubelet[2996]: I0424 23:54:12.654830 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:12.654937 kubelet[2996]: I0424 23:54:12.654854 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f9ce25d9fa880ccb4775a2154155a29-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-136\" (UID: \"0f9ce25d9fa880ccb4775a2154155a29\") " pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:12.654937 kubelet[2996]: I0424 23:54:12.654875 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-ca-certs\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:12.655109 kubelet[2996]: I0424 23:54:12.654897 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:12.655109 kubelet[2996]: I0424 23:54:12.654924 2996
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136" Apr 24 23:54:12.656173 kubelet[2996]: E0424 23:54:12.656130 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="400ms" Apr 24 23:54:12.841594 kubelet[2996]: I0424 23:54:12.841560 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136" Apr 24 23:54:12.842143 kubelet[2996]: E0424 23:54:12.841944 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136" Apr 24 23:54:12.908381 containerd[2101]: time="2026-04-24T23:54:12.908328879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-136,Uid:b4f5d8cd872af89bd8e685ace8a0e357,Namespace:kube-system,Attempt:0,}" Apr 24 23:54:12.934625 containerd[2101]: time="2026-04-24T23:54:12.934562307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-136,Uid:a41dd6e21b026467a8a3346cfe58cb46,Namespace:kube-system,Attempt:0,}" Apr 24 23:54:12.953261 containerd[2101]: time="2026-04-24T23:54:12.953212291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-136,Uid:0f9ce25d9fa880ccb4775a2154155a29,Namespace:kube-system,Attempt:0,}" Apr 24 23:54:13.056872 kubelet[2996]: E0424 23:54:13.056819 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="800ms" Apr 24 23:54:13.244197 kubelet[2996]: I0424 23:54:13.244093 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136" Apr 24 23:54:13.244498 kubelet[2996]: E0424 23:54:13.244464 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136" Apr 24 23:54:13.377461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454241634.mount: Deactivated successfully. Apr 24 23:54:13.385247 containerd[2101]: time="2026-04-24T23:54:13.385193998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:54:13.386715 containerd[2101]: time="2026-04-24T23:54:13.386654397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 24 23:54:13.388014 containerd[2101]: time="2026-04-24T23:54:13.387974589Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:54:13.389078 containerd[2101]: time="2026-04-24T23:54:13.389036117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:54:13.390150 containerd[2101]: time="2026-04-24T23:54:13.390106390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:54:13.391675 containerd[2101]: time="2026-04-24T23:54:13.391632774Z" level=info 
msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:54:13.392676 containerd[2101]: time="2026-04-24T23:54:13.392570114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:54:13.396041 containerd[2101]: time="2026-04-24T23:54:13.395988843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:54:13.398291 containerd[2101]: time="2026-04-24T23:54:13.396913020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 443.591894ms" Apr 24 23:54:13.400138 containerd[2101]: time="2026-04-24T23:54:13.400097527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.436832ms" Apr 24 23:54:13.400955 containerd[2101]: time="2026-04-24T23:54:13.400917896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.494289ms" Apr 24 23:54:13.664866 
kubelet[2996]: E0424 23:54:13.664822 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:54:13.705230 containerd[2101]: time="2026-04-24T23:54:13.704532593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:54:13.705230 containerd[2101]: time="2026-04-24T23:54:13.704621598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:54:13.705230 containerd[2101]: time="2026-04-24T23:54:13.704644709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.705230 containerd[2101]: time="2026-04-24T23:54:13.704776178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.707451 containerd[2101]: time="2026-04-24T23:54:13.707053637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:54:13.707451 containerd[2101]: time="2026-04-24T23:54:13.707127174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:54:13.707451 containerd[2101]: time="2026-04-24T23:54:13.707171713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.708080 containerd[2101]: time="2026-04-24T23:54:13.708014576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.715674 containerd[2101]: time="2026-04-24T23:54:13.711607531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:54:13.715674 containerd[2101]: time="2026-04-24T23:54:13.711687300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:54:13.715674 containerd[2101]: time="2026-04-24T23:54:13.711711172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.715674 containerd[2101]: time="2026-04-24T23:54:13.711824716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:13.852191 kubelet[2996]: E0424 23:54:13.852150 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-136&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:54:13.857671 kubelet[2996]: E0424 23:54:13.857598 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="1.6s" Apr 24 23:54:13.864575 containerd[2101]: time="2026-04-24T23:54:13.864529089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-136,Uid:b4f5d8cd872af89bd8e685ace8a0e357,Namespace:kube-system,Attempt:0,} returns sandbox id \"b042d6bf21127297b2e4ca1975ea08d8a3ccc695ac95665f2d003f35d36e99af\"" Apr 24 23:54:13.872096 
containerd[2101]: time="2026-04-24T23:54:13.871989693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-136,Uid:0f9ce25d9fa880ccb4775a2154155a29,Namespace:kube-system,Attempt:0,} returns sandbox id \"9139ccc0734a14b22ef506a838f01d3118f1ecc03da83eef0682b153bf777dcb\"" Apr 24 23:54:13.874049 containerd[2101]: time="2026-04-24T23:54:13.874014589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-136,Uid:a41dd6e21b026467a8a3346cfe58cb46,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e068ced86d4063022b8f9411084f7b1c5a2ea5c97549c6363cbbb89e9bfc290\"" Apr 24 23:54:13.875255 containerd[2101]: time="2026-04-24T23:54:13.875224284Z" level=info msg="CreateContainer within sandbox \"b042d6bf21127297b2e4ca1975ea08d8a3ccc695ac95665f2d003f35d36e99af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:54:13.909369 kubelet[2996]: E0424 23:54:13.909322 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:54:13.911760 containerd[2101]: time="2026-04-24T23:54:13.911605418Z" level=info msg="CreateContainer within sandbox \"9139ccc0734a14b22ef506a838f01d3118f1ecc03da83eef0682b153bf777dcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:54:13.914704 containerd[2101]: time="2026-04-24T23:54:13.914660490Z" level=info msg="CreateContainer within sandbox \"0e068ced86d4063022b8f9411084f7b1c5a2ea5c97549c6363cbbb89e9bfc290\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:54:13.937581 kubelet[2996]: E0424 23:54:13.937444 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.31.23.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:54:13.984037 containerd[2101]: time="2026-04-24T23:54:13.983970268Z" level=info msg="CreateContainer within sandbox \"b042d6bf21127297b2e4ca1975ea08d8a3ccc695ac95665f2d003f35d36e99af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e5e21b85bc950c7b0e1ce683c2b4c454abcce467e7806cf3281fc6ba095fb75\"" Apr 24 23:54:13.985895 containerd[2101]: time="2026-04-24T23:54:13.985549327Z" level=info msg="CreateContainer within sandbox \"9139ccc0734a14b22ef506a838f01d3118f1ecc03da83eef0682b153bf777dcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16\"" Apr 24 23:54:13.985895 containerd[2101]: time="2026-04-24T23:54:13.985839841Z" level=info msg="StartContainer for \"4e5e21b85bc950c7b0e1ce683c2b4c454abcce467e7806cf3281fc6ba095fb75\"" Apr 24 23:54:13.989300 containerd[2101]: time="2026-04-24T23:54:13.988350928Z" level=info msg="CreateContainer within sandbox \"0e068ced86d4063022b8f9411084f7b1c5a2ea5c97549c6363cbbb89e9bfc290\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af\"" Apr 24 23:54:13.989300 containerd[2101]: time="2026-04-24T23:54:13.988575343Z" level=info msg="StartContainer for \"d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16\"" Apr 24 23:54:13.998455 containerd[2101]: time="2026-04-24T23:54:13.998415503Z" level=info msg="StartContainer for \"f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af\"" Apr 24 23:54:14.049355 kubelet[2996]: I0424 23:54:14.049317 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136" 
Apr 24 23:54:14.049751 kubelet[2996]: E0424 23:54:14.049701 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136" Apr 24 23:54:14.107340 containerd[2101]: time="2026-04-24T23:54:14.106572139Z" level=info msg="StartContainer for \"4e5e21b85bc950c7b0e1ce683c2b4c454abcce467e7806cf3281fc6ba095fb75\" returns successfully" Apr 24 23:54:14.157222 containerd[2101]: time="2026-04-24T23:54:14.157017499Z" level=info msg="StartContainer for \"f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af\" returns successfully" Apr 24 23:54:14.158666 containerd[2101]: time="2026-04-24T23:54:14.158548960Z" level=info msg="StartContainer for \"d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16\" returns successfully" Apr 24 23:54:14.511720 kubelet[2996]: E0424 23:54:14.511178 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:14.511720 kubelet[2996]: E0424 23:54:14.511567 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:14.517679 kubelet[2996]: E0424 23:54:14.517652 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:14.535383 kubelet[2996]: E0424 23:54:14.535345 2996 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.136:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:54:15.459159 kubelet[2996]: E0424 23:54:15.459081 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="3.2s" Apr 24 23:54:15.519016 kubelet[2996]: E0424 23:54:15.518753 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:15.519016 kubelet[2996]: E0424 23:54:15.518821 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:15.611961 kubelet[2996]: E0424 23:54:15.611911 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:54:15.652095 kubelet[2996]: I0424 23:54:15.652061 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136" Apr 24 23:54:15.652502 kubelet[2996]: E0424 23:54:15.652435 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136" Apr 24 23:54:16.129585 kubelet[2996]: E0424 23:54:16.129537 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:54:16.142383 kubelet[2996]: E0424 23:54:16.142335 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-136&limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:54:16.259883 kubelet[2996]: E0424 23:54:16.259819 2996 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:54:17.096769 kubelet[2996]: E0424 23:54:17.096731 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:18.602261 kubelet[2996]: E0424 23:54:18.602210 2996 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:54:18.659920 kubelet[2996]: E0424 23:54:18.659870 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-136?timeout=10s\": dial tcp 172.31.23.136:6443: connect: connection refused" interval="6.4s" Apr 24 23:54:18.854722 kubelet[2996]: I0424 23:54:18.854561 2996 kubelet_node_status.go:75] "Attempting to 
register node" node="ip-172-31-23-136" Apr 24 23:54:18.855176 kubelet[2996]: E0424 23:54:18.855136 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.136:6443/api/v1/nodes\": dial tcp 172.31.23.136:6443: connect: connection refused" node="ip-172-31-23-136" Apr 24 23:54:19.938368 update_engine[2084]: I20260424 23:54:19.938235 2084 update_attempter.cc:509] Updating boot flags... Apr 24 23:54:19.994314 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3291) Apr 24 23:54:20.179582 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3295) Apr 24 23:54:21.771321 kubelet[2996]: E0424 23:54:21.771252 2996 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-136" not found Apr 24 23:54:22.156683 kubelet[2996]: E0424 23:54:22.156636 2996 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-136" not found Apr 24 23:54:22.416910 kubelet[2996]: I0424 23:54:22.416652 2996 apiserver.go:52] "Watching apiserver" Apr 24 23:54:22.453989 kubelet[2996]: I0424 23:54:22.453941 2996 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:54:22.540806 kubelet[2996]: E0424 23:54:22.540647 2996 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-136\" not found" Apr 24 23:54:22.732878 kubelet[2996]: E0424 23:54:22.732708 2996 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-136" not found Apr 24 23:54:24.094031 kubelet[2996]: E0424 23:54:24.093740 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:24.436260 kubelet[2996]: E0424 23:54:24.436049 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:24.541334 kubelet[2996]: E0424 23:54:24.540497 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:24.541334 kubelet[2996]: E0424 23:54:24.541141 2996 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:24.998297 kubelet[2996]: E0424 23:54:24.998231 2996 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-136" not found Apr 24 23:54:25.065061 kubelet[2996]: E0424 23:54:25.065011 2996 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-136\" not found" node="ip-172-31-23-136" Apr 24 23:54:25.257626 kubelet[2996]: I0424 23:54:25.257173 2996 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136" Apr 24 23:54:25.265661 kubelet[2996]: I0424 23:54:25.265602 2996 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-136" Apr 24 23:54:25.354364 kubelet[2996]: I0424 23:54:25.354316 2996 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-136" Apr 24 23:54:25.370561 kubelet[2996]: I0424 23:54:25.370264 2996 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-136" Apr 24 23:54:25.377635 kubelet[2996]: I0424 23:54:25.376852 2996 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-23-136" Apr 24 23:54:25.539656 kubelet[2996]: I0424 23:54:25.539207 2996 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-136" Apr 24 23:54:25.546686 kubelet[2996]: E0424 23:54:25.546640 2996 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-136" Apr 24 23:54:25.809957 systemd[1]: Reloading requested from client PID 3461 ('systemctl') (unit session-7.scope)... Apr 24 23:54:25.810459 systemd[1]: Reloading... Apr 24 23:54:25.933346 zram_generator::config[3501]: No configuration found. Apr 24 23:54:26.089802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:54:26.189690 systemd[1]: Reloading finished in 378 ms. Apr 24 23:54:26.232645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:54:26.246879 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:54:26.247328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:54:26.260951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:54:26.505102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:54:26.523052 (kubelet)[3571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:54:26.604809 kubelet[3571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:54:26.604809 kubelet[3571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:54:26.604809 kubelet[3571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:54:26.605341 kubelet[3571]: I0424 23:54:26.604936 3571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:54:26.618024 kubelet[3571]: I0424 23:54:26.617982 3571 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:54:26.618024 kubelet[3571]: I0424 23:54:26.618016 3571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:54:26.619301 kubelet[3571]: I0424 23:54:26.618359 3571 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:54:26.620463 kubelet[3571]: I0424 23:54:26.620438 3571 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:54:26.634965 kubelet[3571]: I0424 23:54:26.634919 3571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:54:26.645678 kubelet[3571]: E0424 23:54:26.645637 3571 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:54:26.645678 kubelet[3571]: I0424 23:54:26.645674 3571 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 24 23:54:26.650828 kubelet[3571]: I0424 23:54:26.650202 3571 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:54:26.651250 kubelet[3571]: I0424 23:54:26.651174 3571 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:54:26.651602 kubelet[3571]: I0424 23:54:26.651262 3571 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 24 23:54:26.651761 kubelet[3571]: I0424 23:54:26.651614 3571 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:54:26.651761 kubelet[3571]: I0424 23:54:26.651630 3571 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:54:26.651761 kubelet[3571]: I0424 23:54:26.651707 3571 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:54:26.651988 kubelet[3571]: I0424 23:54:26.651911 3571 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:54:26.651988 kubelet[3571]: I0424 23:54:26.651926 3571 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:54:26.653293 kubelet[3571]: I0424 23:54:26.652495 3571 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:54:26.653293 kubelet[3571]: I0424 23:54:26.652521 3571 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:54:26.663114 kubelet[3571]: I0424 23:54:26.663090 3571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:54:26.664224 kubelet[3571]: I0424 23:54:26.664191 3571 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:54:26.689315 kubelet[3571]: I0424 23:54:26.688843 3571 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:54:26.689315 kubelet[3571]: I0424 23:54:26.688886 3571 server.go:1289] "Started kubelet"
Apr 24 23:54:26.691627 kubelet[3571]: I0424 23:54:26.690470 3571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:54:26.691864 kubelet[3571]: I0424 23:54:26.691732 3571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:54:26.692458 kubelet[3571]: I0424 23:54:26.692352 3571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:54:26.693926 kubelet[3571]: I0424 23:54:26.693790 3571 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:54:26.697299 kubelet[3571]: I0424 23:54:26.696955 3571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:54:26.701853 kubelet[3571]: I0424 23:54:26.701813 3571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:54:26.710671 kubelet[3571]: I0424 23:54:26.710613 3571 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:54:26.711032 kubelet[3571]: I0424 23:54:26.711017 3571 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:54:26.711332 kubelet[3571]: I0424 23:54:26.711316 3571 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:54:26.712860 kubelet[3571]: I0424 23:54:26.712825 3571 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:54:26.713634 kubelet[3571]: I0424 23:54:26.713510 3571 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:54:26.724197 kubelet[3571]: I0424 23:54:26.722415 3571 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:54:26.724626 kubelet[3571]: I0424 23:54:26.724582 3571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:54:26.726140 kubelet[3571]: I0424 23:54:26.726110 3571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:54:26.726140 kubelet[3571]: I0424 23:54:26.726141 3571 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:54:26.726263 kubelet[3571]: I0424 23:54:26.726165 3571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:54:26.726263 kubelet[3571]: I0424 23:54:26.726175 3571 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:54:26.726263 kubelet[3571]: E0424 23:54:26.726214 3571 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:54:26.806921 kubelet[3571]: I0424 23:54:26.806786 3571 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:54:26.807101 kubelet[3571]: I0424 23:54:26.807084 3571 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:54:26.807205 kubelet[3571]: I0424 23:54:26.807197 3571 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:54:26.808827 kubelet[3571]: I0424 23:54:26.808809 3571 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 24 23:54:26.808967 kubelet[3571]: I0424 23:54:26.808940 3571 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 24 23:54:26.809039 kubelet[3571]: I0424 23:54:26.809031 3571 policy_none.go:49] "None policy: Start"
Apr 24 23:54:26.809107 kubelet[3571]: I0424 23:54:26.809100 3571 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:54:26.809177 kubelet[3571]: I0424 23:54:26.809170 3571 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:54:26.809384 kubelet[3571]: I0424 23:54:26.809372 3571 state_mem.go:75] "Updated machine memory state"
Apr 24 23:54:26.810754 kubelet[3571]: E0424 23:54:26.810737 3571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:54:26.812289 kubelet[3571]: I0424 23:54:26.812252 3571 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:54:26.812460 kubelet[3571]: I0424 23:54:26.812408 3571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:54:26.813573 kubelet[3571]: I0424 23:54:26.813560 3571 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:54:26.815576 kubelet[3571]: E0424 23:54:26.815555 3571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:54:26.827142 kubelet[3571]: I0424 23:54:26.827098 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:26.827809 kubelet[3571]: I0424 23:54:26.827146 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:26.832718 kubelet[3571]: I0424 23:54:26.832690 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.839219 kubelet[3571]: E0424 23:54:26.839141 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-136\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:26.841006 kubelet[3571]: E0424 23:54:26.840964 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:26.842163 kubelet[3571]: E0424 23:54:26.841879 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-136\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915133 kubelet[3571]: I0424 23:54:26.915080 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-ca-certs\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:26.915133 kubelet[3571]: I0424 23:54:26.915131 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:26.915371 kubelet[3571]: I0424 23:54:26.915159 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915371 kubelet[3571]: I0424 23:54:26.915185 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915371 kubelet[3571]: I0424 23:54:26.915208 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915371 kubelet[3571]: I0424 23:54:26.915229 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915371 kubelet[3571]: I0424 23:54:26.915297 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f5d8cd872af89bd8e685ace8a0e357-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-136\" (UID: \"b4f5d8cd872af89bd8e685ace8a0e357\") " pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:26.915650 kubelet[3571]: I0424 23:54:26.915320 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a41dd6e21b026467a8a3346cfe58cb46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-136\" (UID: \"a41dd6e21b026467a8a3346cfe58cb46\") " pod="kube-system/kube-controller-manager-ip-172-31-23-136"
Apr 24 23:54:26.915650 kubelet[3571]: I0424 23:54:26.915344 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f9ce25d9fa880ccb4775a2154155a29-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-136\" (UID: \"0f9ce25d9fa880ccb4775a2154155a29\") " pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:26.918681 kubelet[3571]: I0424 23:54:26.918644 3571 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-136"
Apr 24 23:54:26.927392 kubelet[3571]: I0424 23:54:26.927360 3571 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-136"
Apr 24 23:54:26.927576 kubelet[3571]: I0424 23:54:26.927454 3571 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-136"
Apr 24 23:54:27.660778 kubelet[3571]: I0424 23:54:27.660730 3571 apiserver.go:52] "Watching apiserver"
Apr 24 23:54:27.712259 kubelet[3571]: I0424 23:54:27.712211 3571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 24 23:54:27.759366 kubelet[3571]: I0424 23:54:27.759337 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:27.764625 kubelet[3571]: I0424 23:54:27.764436 3571 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:27.766797 kubelet[3571]: E0424 23:54:27.765358 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-136\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-136"
Apr 24 23:54:27.770616 kubelet[3571]: E0424 23:54:27.770539 3571 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-136"
Apr 24 23:54:27.796665 kubelet[3571]: I0424 23:54:27.796083 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-136" podStartSLOduration=2.79606348 podStartE2EDuration="2.79606348s" podCreationTimestamp="2026-04-24 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:54:27.785057161 +0000 UTC m=+1.253206023" watchObservedRunningTime="2026-04-24 23:54:27.79606348 +0000 UTC m=+1.264212338"
Apr 24 23:54:27.796665 kubelet[3571]: I0424 23:54:27.796250 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-136" podStartSLOduration=2.796224513 podStartE2EDuration="2.796224513s" podCreationTimestamp="2026-04-24 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:54:27.796054191 +0000 UTC m=+1.264203054" watchObservedRunningTime="2026-04-24 23:54:27.796224513 +0000 UTC m=+1.264373380"
Apr 24 23:54:27.818609 kubelet[3571]: I0424 23:54:27.818071 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-136" podStartSLOduration=2.818053944 podStartE2EDuration="2.818053944s" podCreationTimestamp="2026-04-24 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:54:27.808325183 +0000 UTC m=+1.276474042" watchObservedRunningTime="2026-04-24 23:54:27.818053944 +0000 UTC m=+1.286202805"
Apr 24 23:54:30.255255 kubelet[3571]: I0424 23:54:30.255228 3571 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 24 23:54:30.257072 containerd[2101]: time="2026-04-24T23:54:30.256948647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 24 23:54:30.258351 kubelet[3571]: I0424 23:54:30.257261 3571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 24 23:54:34.366141 kubelet[3571]: I0424 23:54:34.366097 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db75f5ab-12d4-42e5-b009-6300ca2f8914-xtables-lock\") pod \"kube-proxy-9mvzm\" (UID: \"db75f5ab-12d4-42e5-b009-6300ca2f8914\") " pod="kube-system/kube-proxy-9mvzm"
Apr 24 23:54:34.367783 kubelet[3571]: I0424 23:54:34.366898 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db75f5ab-12d4-42e5-b009-6300ca2f8914-lib-modules\") pod \"kube-proxy-9mvzm\" (UID: \"db75f5ab-12d4-42e5-b009-6300ca2f8914\") " pod="kube-system/kube-proxy-9mvzm"
Apr 24 23:54:34.367783 kubelet[3571]: I0424 23:54:34.366940 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db75f5ab-12d4-42e5-b009-6300ca2f8914-kube-proxy\") pod \"kube-proxy-9mvzm\" (UID: \"db75f5ab-12d4-42e5-b009-6300ca2f8914\") " pod="kube-system/kube-proxy-9mvzm"
Apr 24 23:54:34.367783 kubelet[3571]: I0424 23:54:34.366964 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dggg7\" (UniqueName: \"kubernetes.io/projected/db75f5ab-12d4-42e5-b009-6300ca2f8914-kube-api-access-dggg7\") pod \"kube-proxy-9mvzm\" (UID: \"db75f5ab-12d4-42e5-b009-6300ca2f8914\") " pod="kube-system/kube-proxy-9mvzm"
Apr 24 23:54:34.467547 kubelet[3571]: I0424 23:54:34.467498 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcbe9d36-8fcc-4963-883f-7ca66157b356-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-bjs7x\" (UID: \"dcbe9d36-8fcc-4963-883f-7ca66157b356\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjs7x"
Apr 24 23:54:34.467716 kubelet[3571]: I0424 23:54:34.467645 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccvc\" (UniqueName: \"kubernetes.io/projected/dcbe9d36-8fcc-4963-883f-7ca66157b356-kube-api-access-bccvc\") pod \"tigera-operator-6bf85f8dd-bjs7x\" (UID: \"dcbe9d36-8fcc-4963-883f-7ca66157b356\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjs7x"
Apr 24 23:54:34.616867 containerd[2101]: time="2026-04-24T23:54:34.616753618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mvzm,Uid:db75f5ab-12d4-42e5-b009-6300ca2f8914,Namespace:kube-system,Attempt:0,}"
Apr 24 23:54:34.650580 containerd[2101]: time="2026-04-24T23:54:34.650489650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:54:34.650746 containerd[2101]: time="2026-04-24T23:54:34.650603094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:54:34.650746 containerd[2101]: time="2026-04-24T23:54:34.650647084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:54:34.650845 containerd[2101]: time="2026-04-24T23:54:34.650782710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:54:34.676192 containerd[2101]: time="2026-04-24T23:54:34.676130635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjs7x,Uid:dcbe9d36-8fcc-4963-883f-7ca66157b356,Namespace:tigera-operator,Attempt:0,}"
Apr 24 23:54:34.717822 containerd[2101]: time="2026-04-24T23:54:34.717773633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mvzm,Uid:db75f5ab-12d4-42e5-b009-6300ca2f8914,Namespace:kube-system,Attempt:0,} returns sandbox id \"1641e76250416472e5c7e65e12a96afe2a3862d515516929899623e5a871bcb7\""
Apr 24 23:54:34.728898 containerd[2101]: time="2026-04-24T23:54:34.728607865Z" level=info msg="CreateContainer within sandbox \"1641e76250416472e5c7e65e12a96afe2a3862d515516929899623e5a871bcb7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 24 23:54:34.733141 containerd[2101]: time="2026-04-24T23:54:34.733013758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:54:34.733307 containerd[2101]: time="2026-04-24T23:54:34.733098388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:54:34.733307 containerd[2101]: time="2026-04-24T23:54:34.733165050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:54:34.734212 containerd[2101]: time="2026-04-24T23:54:34.734161696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:54:34.761311 containerd[2101]: time="2026-04-24T23:54:34.761073685Z" level=info msg="CreateContainer within sandbox \"1641e76250416472e5c7e65e12a96afe2a3862d515516929899623e5a871bcb7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2425d14a18a23cd0632f41df5b597f82047bff71a79aada75c5a39ab6b2f0dde\""
Apr 24 23:54:34.765839 containerd[2101]: time="2026-04-24T23:54:34.765698440Z" level=info msg="StartContainer for \"2425d14a18a23cd0632f41df5b597f82047bff71a79aada75c5a39ab6b2f0dde\""
Apr 24 23:54:34.836055 containerd[2101]: time="2026-04-24T23:54:34.836002021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjs7x,Uid:dcbe9d36-8fcc-4963-883f-7ca66157b356,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1e2df3d1a1a1b113974ffcd37f662f04045c79a74fbe0a514e9acc3250944ccb\""
Apr 24 23:54:34.840618 containerd[2101]: time="2026-04-24T23:54:34.840580603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 24 23:54:34.865748 containerd[2101]: time="2026-04-24T23:54:34.865622826Z" level=info msg="StartContainer for \"2425d14a18a23cd0632f41df5b597f82047bff71a79aada75c5a39ab6b2f0dde\" returns successfully"
Apr 24 23:54:36.159078 kubelet[3571]: I0424 23:54:36.158999 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9mvzm" podStartSLOduration=6.158980265 podStartE2EDuration="6.158980265s" podCreationTimestamp="2026-04-24 23:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:54:35.7967828 +0000 UTC m=+9.264931676" watchObservedRunningTime="2026-04-24 23:54:36.158980265 +0000 UTC m=+9.627129144"
Apr 24 23:54:36.355089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352459017.mount: Deactivated successfully.
Apr 24 23:54:38.909408 containerd[2101]: time="2026-04-24T23:54:38.909352269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:38.911365 containerd[2101]: time="2026-04-24T23:54:38.911307741Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 24 23:54:38.913186 containerd[2101]: time="2026-04-24T23:54:38.913154396Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:38.922002 containerd[2101]: time="2026-04-24T23:54:38.921927805Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:38.923707 containerd[2101]: time="2026-04-24T23:54:38.922909237Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.082281119s"
Apr 24 23:54:38.923707 containerd[2101]: time="2026-04-24T23:54:38.922954469Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 24 23:54:38.933001 containerd[2101]: time="2026-04-24T23:54:38.932934623Z" level=info msg="CreateContainer within sandbox \"1e2df3d1a1a1b113974ffcd37f662f04045c79a74fbe0a514e9acc3250944ccb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 24 23:54:38.977157 containerd[2101]: time="2026-04-24T23:54:38.976359456Z" level=info msg="CreateContainer within sandbox \"1e2df3d1a1a1b113974ffcd37f662f04045c79a74fbe0a514e9acc3250944ccb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae\""
Apr 24 23:54:38.978544 containerd[2101]: time="2026-04-24T23:54:38.978457130Z" level=info msg="StartContainer for \"c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae\""
Apr 24 23:54:39.080194 containerd[2101]: time="2026-04-24T23:54:39.079616327Z" level=info msg="StartContainer for \"c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae\" returns successfully"
Apr 24 23:54:44.519159 sudo[2461]: pam_unix(sudo:session): session closed for user root
Apr 24 23:54:44.680425 sshd[2457]: pam_unix(sshd:session): session closed for user core
Apr 24 23:54:44.697185 systemd[1]: sshd@6-172.31.23.136:22-4.175.71.9:48906.service: Deactivated successfully.
Apr 24 23:54:44.704758 systemd-logind[2083]: Session 7 logged out. Waiting for processes to exit.
Apr 24 23:54:44.708956 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 23:54:44.712751 systemd-logind[2083]: Removed session 7.
Apr 24 23:54:48.327292 kubelet[3571]: I0424 23:54:48.320561 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-bjs7x" podStartSLOduration=13.236400917 podStartE2EDuration="17.320540171s" podCreationTimestamp="2026-04-24 23:54:31 +0000 UTC" firstStartedPulling="2026-04-24 23:54:34.839876124 +0000 UTC m=+8.308025173" lastFinishedPulling="2026-04-24 23:54:38.924015585 +0000 UTC m=+12.392164427" observedRunningTime="2026-04-24 23:54:39.804594917 +0000 UTC m=+13.272743779" watchObservedRunningTime="2026-04-24 23:54:48.320540171 +0000 UTC m=+21.788689034"
Apr 24 23:54:48.378880 kubelet[3571]: I0424 23:54:48.378715 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f1909de-97f6-4e94-ad1a-37e300101d7f-typha-certs\") pod \"calico-typha-5684746869-q2xq7\" (UID: \"9f1909de-97f6-4e94-ad1a-37e300101d7f\") " pod="calico-system/calico-typha-5684746869-q2xq7"
Apr 24 23:54:48.378880 kubelet[3571]: I0424 23:54:48.378772 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqbc\" (UniqueName: \"kubernetes.io/projected/9f1909de-97f6-4e94-ad1a-37e300101d7f-kube-api-access-sfqbc\") pod \"calico-typha-5684746869-q2xq7\" (UID: \"9f1909de-97f6-4e94-ad1a-37e300101d7f\") " pod="calico-system/calico-typha-5684746869-q2xq7"
Apr 24 23:54:48.378880 kubelet[3571]: I0424 23:54:48.378802 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1909de-97f6-4e94-ad1a-37e300101d7f-tigera-ca-bundle\") pod \"calico-typha-5684746869-q2xq7\" (UID: \"9f1909de-97f6-4e94-ad1a-37e300101d7f\") " pod="calico-system/calico-typha-5684746869-q2xq7"
Apr 24 23:54:48.585304 kubelet[3571]: I0424 23:54:48.583313 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c6d730e0-53d2-4a29-b038-833cd19e555b-node-certs\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585304 kubelet[3571]: I0424 23:54:48.583375 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-var-lib-calico\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585304 kubelet[3571]: I0424 23:54:48.583398 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-sys-fs\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585304 kubelet[3571]: I0424 23:54:48.583428 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-flexvol-driver-host\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585304 kubelet[3571]: I0424 23:54:48.583451 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6d730e0-53d2-4a29-b038-833cd19e555b-tigera-ca-bundle\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585597 kubelet[3571]: I0424 23:54:48.583477 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-bpffs\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585597 kubelet[3571]: I0424 23:54:48.583499 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-nodeproc\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585597 kubelet[3571]: I0424 23:54:48.583519 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-policysync\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585597 kubelet[3571]: I0424 23:54:48.583540 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-lib-modules\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585597 kubelet[3571]: I0424 23:54:48.583563 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-cni-net-dir\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585794 kubelet[3571]: I0424 23:54:48.583587 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-xtables-lock\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585794 kubelet[3571]: I0424 23:54:48.583610 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-cni-bin-dir\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585794 kubelet[3571]: I0424 23:54:48.583631 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwvc\" (UniqueName: \"kubernetes.io/projected/c6d730e0-53d2-4a29-b038-833cd19e555b-kube-api-access-2pwvc\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585794 kubelet[3571]: I0424 23:54:48.583655 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-cni-log-dir\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.585794 kubelet[3571]: I0424 23:54:48.583677 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c6d730e0-53d2-4a29-b038-833cd19e555b-var-run-calico\") pod \"calico-node-bl6sq\" (UID: \"c6d730e0-53d2-4a29-b038-833cd19e555b\") " pod="calico-system/calico-node-bl6sq"
Apr 24 23:54:48.611457 kubelet[3571]: E0424 23:54:48.610886 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c"
Apr 24 23:54:48.642618 containerd[2101]: time="2026-04-24T23:54:48.642567723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5684746869-q2xq7,Uid:9f1909de-97f6-4e94-ad1a-37e300101d7f,Namespace:calico-system,Attempt:0,}"
Apr 24 23:54:48.685171 kubelet[3571]: I0424 23:54:48.685128 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/54f65b93-ac7d-4a34-935e-59195780993c-varrun\") pod \"csi-node-driver-clgnl\" (UID: \"54f65b93-ac7d-4a34-935e-59195780993c\") " pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:54:48.685386 kubelet[3571]: I0424 23:54:48.685202 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54f65b93-ac7d-4a34-935e-59195780993c-kubelet-dir\") pod \"csi-node-driver-clgnl\" (UID: \"54f65b93-ac7d-4a34-935e-59195780993c\") " pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:54:48.685386 kubelet[3571]: I0424 23:54:48.685290 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87wh\" (UniqueName: \"kubernetes.io/projected/54f65b93-ac7d-4a34-935e-59195780993c-kube-api-access-v87wh\") pod \"csi-node-driver-clgnl\" (UID: \"54f65b93-ac7d-4a34-935e-59195780993c\") " pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:54:48.685513 kubelet[3571]: I0424 23:54:48.685428 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/54f65b93-ac7d-4a34-935e-59195780993c-registration-dir\") pod \"csi-node-driver-clgnl\" (UID: \"54f65b93-ac7d-4a34-935e-59195780993c\") " pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:54:48.685513 kubelet[3571]: I0424 23:54:48.685454 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/54f65b93-ac7d-4a34-935e-59195780993c-socket-dir\") pod \"csi-node-driver-clgnl\" (UID: \"54f65b93-ac7d-4a34-935e-59195780993c\") " pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:54:48.696292 kubelet[3571]: E0424 23:54:48.695207 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:48.696292 kubelet[3571]: W0424 23:54:48.695235 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:48.696292 kubelet[3571]: E0424 23:54:48.695261 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:48.699838 kubelet[3571]: E0424 23:54:48.698788 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:48.699838 kubelet[3571]: W0424 23:54:48.698809 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:48.699838 kubelet[3571]: E0424 23:54:48.698829 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:54:48.699838 kubelet[3571]: E0424 23:54:48.699612 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.699838 kubelet[3571]: W0424 23:54:48.699791 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.699838 kubelet[3571]: E0424 23:54:48.699814 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.709666 kubelet[3571]: E0424 23:54:48.706507 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.709666 kubelet[3571]: W0424 23:54:48.706535 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.709666 kubelet[3571]: E0424 23:54:48.706560 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.709666 kubelet[3571]: E0424 23:54:48.707376 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.709666 kubelet[3571]: W0424 23:54:48.707392 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.709666 kubelet[3571]: E0424 23:54:48.707411 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.712129 kubelet[3571]: E0424 23:54:48.712097 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.712129 kubelet[3571]: W0424 23:54:48.712123 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.712335 kubelet[3571]: E0424 23:54:48.712146 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.712492 kubelet[3571]: E0424 23:54:48.712401 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.712492 kubelet[3571]: W0424 23:54:48.712419 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.712492 kubelet[3571]: E0424 23:54:48.712437 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.713665 kubelet[3571]: E0424 23:54:48.713339 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.713665 kubelet[3571]: W0424 23:54:48.713355 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.713665 kubelet[3571]: E0424 23:54:48.713371 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.715289 kubelet[3571]: E0424 23:54:48.714777 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.715289 kubelet[3571]: W0424 23:54:48.714793 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.715289 kubelet[3571]: E0424 23:54:48.714809 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.718704 kubelet[3571]: E0424 23:54:48.718471 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.718704 kubelet[3571]: W0424 23:54:48.718491 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.718704 kubelet[3571]: E0424 23:54:48.718510 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.723049 kubelet[3571]: E0424 23:54:48.721989 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.723049 kubelet[3571]: W0424 23:54:48.722008 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.723049 kubelet[3571]: E0424 23:54:48.722027 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.736110 containerd[2101]: time="2026-04-24T23:54:48.735998222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:54:48.736294 containerd[2101]: time="2026-04-24T23:54:48.736144821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:54:48.736294 containerd[2101]: time="2026-04-24T23:54:48.736184757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:48.737042 containerd[2101]: time="2026-04-24T23:54:48.737002618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:48.787410 kubelet[3571]: E0424 23:54:48.787369 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.787410 kubelet[3571]: W0424 23:54:48.787396 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.787605 kubelet[3571]: E0424 23:54:48.787422 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.787846 kubelet[3571]: E0424 23:54:48.787815 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.787846 kubelet[3571]: W0424 23:54:48.787836 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.787985 kubelet[3571]: E0424 23:54:48.787854 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.788363 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.794960 kubelet[3571]: W0424 23:54:48.788376 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.788397 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.788713 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.794960 kubelet[3571]: W0424 23:54:48.788722 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.788734 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.789108 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.794960 kubelet[3571]: W0424 23:54:48.789117 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.789127 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.794960 kubelet[3571]: E0424 23:54:48.789507 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795481 containerd[2101]: time="2026-04-24T23:54:48.792499645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bl6sq,Uid:c6d730e0-53d2-4a29-b038-833cd19e555b,Namespace:calico-system,Attempt:0,}" Apr 24 23:54:48.795544 kubelet[3571]: W0424 23:54:48.789515 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.789527 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.789786 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795544 kubelet[3571]: W0424 23:54:48.789795 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.789804 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.790087 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795544 kubelet[3571]: W0424 23:54:48.790095 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.790114 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.795544 kubelet[3571]: E0424 23:54:48.791492 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795544 kubelet[3571]: W0424 23:54:48.791512 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.791549 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.792887 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795965 kubelet[3571]: W0424 23:54:48.792905 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.792920 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.793210 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795965 kubelet[3571]: W0424 23:54:48.793219 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.793229 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.793514 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.795965 kubelet[3571]: W0424 23:54:48.793522 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.795965 kubelet[3571]: E0424 23:54:48.793532 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.796487 kubelet[3571]: E0424 23:54:48.793919 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.796487 kubelet[3571]: W0424 23:54:48.793937 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.796487 kubelet[3571]: E0424 23:54:48.793955 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.796662 kubelet[3571]: E0424 23:54:48.796556 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.796662 kubelet[3571]: W0424 23:54:48.796569 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.796662 kubelet[3571]: E0424 23:54:48.796583 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.797018 kubelet[3571]: E0424 23:54:48.796868 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.797018 kubelet[3571]: W0424 23:54:48.796889 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.797018 kubelet[3571]: E0424 23:54:48.796903 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.797459 kubelet[3571]: E0424 23:54:48.797427 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.797459 kubelet[3571]: W0424 23:54:48.797447 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.798140 kubelet[3571]: E0424 23:54:48.797461 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.798140 kubelet[3571]: E0424 23:54:48.797749 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.798140 kubelet[3571]: W0424 23:54:48.797768 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.798140 kubelet[3571]: E0424 23:54:48.797788 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.798594 kubelet[3571]: E0424 23:54:48.798575 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.798594 kubelet[3571]: W0424 23:54:48.798594 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.798717 kubelet[3571]: E0424 23:54:48.798607 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.798857 kubelet[3571]: E0424 23:54:48.798825 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.798857 kubelet[3571]: W0424 23:54:48.798844 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.798964 kubelet[3571]: E0424 23:54:48.798858 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.799238 kubelet[3571]: E0424 23:54:48.799218 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.799238 kubelet[3571]: W0424 23:54:48.799237 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.799430 kubelet[3571]: E0424 23:54:48.799250 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.799664 kubelet[3571]: E0424 23:54:48.799630 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.799664 kubelet[3571]: W0424 23:54:48.799652 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.799770 kubelet[3571]: E0424 23:54:48.799667 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.799993 kubelet[3571]: E0424 23:54:48.799965 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.799993 kubelet[3571]: W0424 23:54:48.799985 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.800117 kubelet[3571]: E0424 23:54:48.799998 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.800795 kubelet[3571]: E0424 23:54:48.800764 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.800795 kubelet[3571]: W0424 23:54:48.800786 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.800977 kubelet[3571]: E0424 23:54:48.800800 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.801693 kubelet[3571]: E0424 23:54:48.801246 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.801693 kubelet[3571]: W0424 23:54:48.801255 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.801693 kubelet[3571]: E0424 23:54:48.801316 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:48.801693 kubelet[3571]: E0424 23:54:48.801639 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.801693 kubelet[3571]: W0424 23:54:48.801659 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.801693 kubelet[3571]: E0424 23:54:48.801672 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.815210 kubelet[3571]: E0424 23:54:48.815163 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:48.815210 kubelet[3571]: W0424 23:54:48.815200 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:48.815465 kubelet[3571]: E0424 23:54:48.815222 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:48.845783 containerd[2101]: time="2026-04-24T23:54:48.844973659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:54:48.852295 containerd[2101]: time="2026-04-24T23:54:48.845055235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:54:48.852295 containerd[2101]: time="2026-04-24T23:54:48.849904612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:48.852295 containerd[2101]: time="2026-04-24T23:54:48.850047206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:54:48.871577 containerd[2101]: time="2026-04-24T23:54:48.871435821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5684746869-q2xq7,Uid:9f1909de-97f6-4e94-ad1a-37e300101d7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8dca3b0ccc64cc55027d37e58e3313f89112bcd5f1ab843da7bb4dbd2ab8c77\"" Apr 24 23:54:48.874169 containerd[2101]: time="2026-04-24T23:54:48.874128740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:54:48.914596 containerd[2101]: time="2026-04-24T23:54:48.914534177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bl6sq,Uid:c6d730e0-53d2-4a29-b038-833cd19e555b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\"" Apr 24 23:54:50.384637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923083983.mount: Deactivated successfully. 
Apr 24 23:54:50.730795 kubelet[3571]: E0424 23:54:50.729861 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c"
Apr 24 23:54:51.674100 containerd[2101]: time="2026-04-24T23:54:51.674044726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:51.675611 containerd[2101]: time="2026-04-24T23:54:51.675550447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 24 23:54:51.676857 containerd[2101]: time="2026-04-24T23:54:51.676609769Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:51.679736 containerd[2101]: time="2026-04-24T23:54:51.679702314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:54:51.680608 containerd[2101]: time="2026-04-24T23:54:51.680577658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.806399521s"
Apr 24 23:54:51.680737 containerd[2101]: time="2026-04-24T23:54:51.680718077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 24 23:54:51.682410 containerd[2101]: time="2026-04-24T23:54:51.682389042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 24 23:54:51.710530 containerd[2101]: time="2026-04-24T23:54:51.710483681Z" level=info msg="CreateContainer within sandbox \"e8dca3b0ccc64cc55027d37e58e3313f89112bcd5f1ab843da7bb4dbd2ab8c77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 24 23:54:51.734494 containerd[2101]: time="2026-04-24T23:54:51.734445189Z" level=info msg="CreateContainer within sandbox \"e8dca3b0ccc64cc55027d37e58e3313f89112bcd5f1ab843da7bb4dbd2ab8c77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"da40f9f3e1489326819a539c09e70e3df7f0ed678161a31368e50fe8b32267af\""
Apr 24 23:54:51.735286 containerd[2101]: time="2026-04-24T23:54:51.735244053Z" level=info msg="StartContainer for \"da40f9f3e1489326819a539c09e70e3df7f0ed678161a31368e50fe8b32267af\""
Apr 24 23:54:51.829106 containerd[2101]: time="2026-04-24T23:54:51.829045209Z" level=info msg="StartContainer for \"da40f9f3e1489326819a539c09e70e3df7f0ed678161a31368e50fe8b32267af\" returns successfully"
Apr 24 23:54:51.888298 kubelet[3571]: E0424 23:54:51.887908 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.888298 kubelet[3571]: W0424 23:54:51.887929 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.888298 kubelet[3571]: E0424 23:54:51.887950 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.895285 kubelet[3571]: E0424 23:54:51.892942 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.895285 kubelet[3571]: W0424 23:54:51.893649 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.895285 kubelet[3571]: E0424 23:54:51.893686 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.895285 kubelet[3571]: E0424 23:54:51.894120 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.895285 kubelet[3571]: W0424 23:54:51.894132 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.895285 kubelet[3571]: E0424 23:54:51.894146 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.898028 kubelet[3571]: E0424 23:54:51.896911 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.898028 kubelet[3571]: W0424 23:54:51.897099 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.898028 kubelet[3571]: E0424 23:54:51.897120 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.898028 kubelet[3571]: E0424 23:54:51.898374 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.898028 kubelet[3571]: W0424 23:54:51.898385 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.898028 kubelet[3571]: E0424 23:54:51.898414 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.900393 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.902241 kubelet[3571]: W0424 23:54:51.900480 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.900499 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.901340 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.902241 kubelet[3571]: W0424 23:54:51.901353 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.901370 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.901730 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.902241 kubelet[3571]: W0424 23:54:51.901742 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.901755 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.902241 kubelet[3571]: E0424 23:54:51.902137 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.903444 kubelet[3571]: W0424 23:54:51.902148 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.902161 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.902574 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.903444 kubelet[3571]: W0424 23:54:51.902605 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.902617 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.902957 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.903444 kubelet[3571]: W0424 23:54:51.902968 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.903002 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.903444 kubelet[3571]: E0424 23:54:51.903426 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.903832 kubelet[3571]: W0424 23:54:51.903456 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.903832 kubelet[3571]: E0424 23:54:51.903470 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.903928 kubelet[3571]: E0424 23:54:51.903840 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.903928 kubelet[3571]: W0424 23:54:51.903875 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.903928 kubelet[3571]: E0424 23:54:51.903889 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.907296 kubelet[3571]: E0424 23:54:51.904194 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.907296 kubelet[3571]: W0424 23:54:51.904207 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.907296 kubelet[3571]: E0424 23:54:51.904219 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.907296 kubelet[3571]: E0424 23:54:51.904614 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.907296 kubelet[3571]: W0424 23:54:51.904625 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.907296 kubelet[3571]: E0424 23:54:51.904638 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.914282 kubelet[3571]: E0424 23:54:51.913473 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.914282 kubelet[3571]: W0424 23:54:51.914311 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.914282 kubelet[3571]: E0424 23:54:51.914348 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.915440 kubelet[3571]: E0424 23:54:51.915421 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.915695 kubelet[3571]: W0424 23:54:51.915557 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.915695 kubelet[3571]: E0424 23:54:51.915585 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.916050 kubelet[3571]: E0424 23:54:51.916007 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.916050 kubelet[3571]: W0424 23:54:51.916020 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.916050 kubelet[3571]: E0424 23:54:51.916034 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.916667 kubelet[3571]: E0424 23:54:51.916546 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.916667 kubelet[3571]: W0424 23:54:51.916563 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.916667 kubelet[3571]: E0424 23:54:51.916578 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.917228 kubelet[3571]: E0424 23:54:51.917000 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.917228 kubelet[3571]: W0424 23:54:51.917014 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.917228 kubelet[3571]: E0424 23:54:51.917028 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.917620 kubelet[3571]: E0424 23:54:51.917472 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.917620 kubelet[3571]: W0424 23:54:51.917486 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.917620 kubelet[3571]: E0424 23:54:51.917499 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.918107 kubelet[3571]: E0424 23:54:51.917973 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.918107 kubelet[3571]: W0424 23:54:51.917990 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.918107 kubelet[3571]: E0424 23:54:51.918003 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.918616 kubelet[3571]: E0424 23:54:51.918471 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.918616 kubelet[3571]: W0424 23:54:51.918484 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.918616 kubelet[3571]: E0424 23:54:51.918497 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.919034 kubelet[3571]: E0424 23:54:51.918931 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.919034 kubelet[3571]: W0424 23:54:51.918944 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.919034 kubelet[3571]: E0424 23:54:51.918957 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.919849 kubelet[3571]: E0424 23:54:51.919657 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.919849 kubelet[3571]: W0424 23:54:51.919670 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.919849 kubelet[3571]: E0424 23:54:51.919683 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.920193 kubelet[3571]: E0424 23:54:51.920059 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.920193 kubelet[3571]: W0424 23:54:51.920071 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.920193 kubelet[3571]: E0424 23:54:51.920085 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.921238 kubelet[3571]: E0424 23:54:51.920625 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.921238 kubelet[3571]: W0424 23:54:51.920639 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.921238 kubelet[3571]: E0424 23:54:51.920652 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.921681 kubelet[3571]: E0424 23:54:51.921669 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.921763 kubelet[3571]: W0424 23:54:51.921751 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.921850 kubelet[3571]: E0424 23:54:51.921838 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.922129 kubelet[3571]: E0424 23:54:51.922116 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.922218 kubelet[3571]: W0424 23:54:51.922206 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.922318 kubelet[3571]: E0424 23:54:51.922306 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.922631 kubelet[3571]: E0424 23:54:51.922619 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.922753 kubelet[3571]: W0424 23:54:51.922712 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.922753 kubelet[3571]: E0424 23:54:51.922728 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.923246 kubelet[3571]: E0424 23:54:51.923093 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.923246 kubelet[3571]: W0424 23:54:51.923107 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.923246 kubelet[3571]: E0424 23:54:51.923119 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.923805 kubelet[3571]: E0424 23:54:51.923619 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.923805 kubelet[3571]: W0424 23:54:51.923631 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.923805 kubelet[3571]: E0424 23:54:51.923644 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:51.926655 kubelet[3571]: E0424 23:54:51.926514 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:51.926655 kubelet[3571]: W0424 23:54:51.926595 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:51.926655 kubelet[3571]: E0424 23:54:51.926614 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.727563 kubelet[3571]: E0424 23:54:52.727069 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c"
Apr 24 23:54:52.873689 kubelet[3571]: I0424 23:54:52.873649 3571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:54:52.910850 kubelet[3571]: E0424 23:54:52.910808 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.910850 kubelet[3571]: W0424 23:54:52.910837 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.911521 kubelet[3571]: E0424 23:54:52.910861 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.911521 kubelet[3571]: E0424 23:54:52.911223 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.911521 kubelet[3571]: W0424 23:54:52.911234 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.911521 kubelet[3571]: E0424 23:54:52.911250 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.911521 kubelet[3571]: E0424 23:54:52.911516 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.911755 kubelet[3571]: W0424 23:54:52.911527 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.911755 kubelet[3571]: E0424 23:54:52.911539 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.911845 kubelet[3571]: E0424 23:54:52.911761 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.911845 kubelet[3571]: W0424 23:54:52.911771 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.911845 kubelet[3571]: E0424 23:54:52.911782 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.912027 kubelet[3571]: E0424 23:54:52.912002 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.912027 kubelet[3571]: W0424 23:54:52.912020 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.912157 kubelet[3571]: E0424 23:54:52.912032 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.912284 kubelet[3571]: E0424 23:54:52.912242 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.912284 kubelet[3571]: W0424 23:54:52.912279 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.912394 kubelet[3571]: E0424 23:54:52.912293 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.912541 kubelet[3571]: E0424 23:54:52.912521 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.912541 kubelet[3571]: W0424 23:54:52.912536 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.912657 kubelet[3571]: E0424 23:54:52.912547 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.912798 kubelet[3571]: E0424 23:54:52.912769 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.912798 kubelet[3571]: W0424 23:54:52.912791 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.912916 kubelet[3571]: E0424 23:54:52.912803 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.913082 kubelet[3571]: E0424 23:54:52.913055 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.913152 kubelet[3571]: W0424 23:54:52.913090 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.913152 kubelet[3571]: E0424 23:54:52.913104 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.913418 kubelet[3571]: E0424 23:54:52.913397 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.913508 kubelet[3571]: W0424 23:54:52.913419 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.913508 kubelet[3571]: E0424 23:54:52.913432 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.913686 kubelet[3571]: E0424 23:54:52.913633 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.913686 kubelet[3571]: W0424 23:54:52.913643 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.913686 kubelet[3571]: E0424 23:54:52.913655 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.913908 kubelet[3571]: E0424 23:54:52.913852 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.913908 kubelet[3571]: W0424 23:54:52.913861 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.913908 kubelet[3571]: E0424 23:54:52.913873 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.914105 kubelet[3571]: E0424 23:54:52.914085 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.914105 kubelet[3571]: W0424 23:54:52.914095 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.914258 kubelet[3571]: E0424 23:54:52.914106 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.914967 kubelet[3571]: E0424 23:54:52.914339 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.914967 kubelet[3571]: W0424 23:54:52.914349 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.914967 kubelet[3571]: E0424 23:54:52.914368 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.914967 kubelet[3571]: E0424 23:54:52.914622 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.914967 kubelet[3571]: W0424 23:54:52.914630 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.914967 kubelet[3571]: E0424 23:54:52.914639 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:54:52.924127 kubelet[3571]: E0424 23:54:52.924092 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:54:52.924127 kubelet[3571]: W0424 23:54:52.924118 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:54:52.924411 kubelet[3571]: E0424 23:54:52.924150 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.924564 kubelet[3571]: E0424 23:54:52.924537 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.924564 kubelet[3571]: W0424 23:54:52.924564 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.924819 kubelet[3571]: E0424 23:54:52.924578 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.924947 kubelet[3571]: E0424 23:54:52.924926 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.925003 kubelet[3571]: W0424 23:54:52.924946 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.925003 kubelet[3571]: E0424 23:54:52.924961 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.925675 kubelet[3571]: E0424 23:54:52.925655 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.925840 kubelet[3571]: W0424 23:54:52.925676 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.925840 kubelet[3571]: E0424 23:54:52.925702 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.926120 kubelet[3571]: E0424 23:54:52.925970 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.926120 kubelet[3571]: W0424 23:54:52.925982 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.926120 kubelet[3571]: E0424 23:54:52.926061 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.926927 kubelet[3571]: E0424 23:54:52.926389 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.926927 kubelet[3571]: W0424 23:54:52.926411 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.926927 kubelet[3571]: E0424 23:54:52.926426 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.926927 kubelet[3571]: E0424 23:54:52.926755 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.926927 kubelet[3571]: W0424 23:54:52.926765 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.926927 kubelet[3571]: E0424 23:54:52.926776 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.927329 kubelet[3571]: E0424 23:54:52.926981 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.927329 kubelet[3571]: W0424 23:54:52.926992 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.927329 kubelet[3571]: E0424 23:54:52.927004 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.927329 kubelet[3571]: E0424 23:54:52.927315 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.927329 kubelet[3571]: W0424 23:54:52.927326 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.927527 kubelet[3571]: E0424 23:54:52.927347 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.936206 kubelet[3571]: E0424 23:54:52.935203 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.936206 kubelet[3571]: W0424 23:54:52.935223 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.936206 kubelet[3571]: E0424 23:54:52.935243 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.936648 kubelet[3571]: E0424 23:54:52.936631 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.936730 kubelet[3571]: W0424 23:54:52.936717 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.936828 kubelet[3571]: E0424 23:54:52.936815 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.937195 kubelet[3571]: E0424 23:54:52.937148 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.937307 kubelet[3571]: W0424 23:54:52.937197 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.937307 kubelet[3571]: E0424 23:54:52.937215 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.938447 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.939694 kubelet[3571]: W0424 23:54:52.938467 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.938482 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.938907 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.939694 kubelet[3571]: W0424 23:54:52.938916 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.938928 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.939115 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.939694 kubelet[3571]: W0424 23:54:52.939123 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.939211 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.939694 kubelet[3571]: E0424 23:54:52.939434 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.941115 kubelet[3571]: W0424 23:54:52.939445 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.941115 kubelet[3571]: E0424 23:54:52.939456 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:52.941115 kubelet[3571]: E0424 23:54:52.939691 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.941115 kubelet[3571]: W0424 23:54:52.939701 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.941115 kubelet[3571]: E0424 23:54:52.939713 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:54:52.941115 kubelet[3571]: E0424 23:54:52.940085 3571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:54:52.941115 kubelet[3571]: W0424 23:54:52.940096 3571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:54:52.941115 kubelet[3571]: E0424 23:54:52.940108 3571 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:54:53.196555 containerd[2101]: time="2026-04-24T23:54:53.196499215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:54:53.197970 containerd[2101]: time="2026-04-24T23:54:53.197804686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:54:53.199374 containerd[2101]: time="2026-04-24T23:54:53.198968557Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:54:53.201918 containerd[2101]: time="2026-04-24T23:54:53.201883141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:54:53.202783 containerd[2101]: time="2026-04-24T23:54:53.202749284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.520224343s" Apr 24 23:54:53.202946 containerd[2101]: time="2026-04-24T23:54:53.202909356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:54:53.208599 containerd[2101]: time="2026-04-24T23:54:53.208547106Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:54:53.233290 containerd[2101]: time="2026-04-24T23:54:53.231796344Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f\"" Apr 24 23:54:53.233290 containerd[2101]: time="2026-04-24T23:54:53.232989238Z" level=info msg="StartContainer for \"912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f\"" Apr 24 23:54:53.303035 systemd[1]: run-containerd-runc-k8s.io-912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f-runc.IZmZdG.mount: Deactivated successfully. 
Apr 24 23:54:53.350487 containerd[2101]: time="2026-04-24T23:54:53.350439825Z" level=info msg="StartContainer for \"912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f\" returns successfully" Apr 24 23:54:53.484143 containerd[2101]: time="2026-04-24T23:54:53.471321250Z" level=info msg="shim disconnected" id=912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f namespace=k8s.io Apr 24 23:54:53.484143 containerd[2101]: time="2026-04-24T23:54:53.484061924Z" level=warning msg="cleaning up after shim disconnected" id=912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f namespace=k8s.io Apr 24 23:54:53.484143 containerd[2101]: time="2026-04-24T23:54:53.484084145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:54:53.693898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-912ae7ad5f2d5c1f91969ffb3a9e009b0eb6c7b5f8e2f9e85858c52020b4d15f-rootfs.mount: Deactivated successfully. Apr 24 23:54:53.881844 containerd[2101]: time="2026-04-24T23:54:53.880490781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:54:53.902336 kubelet[3571]: I0424 23:54:53.902239 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5684746869-q2xq7" podStartSLOduration=3.093819922 podStartE2EDuration="5.902214441s" podCreationTimestamp="2026-04-24 23:54:48 +0000 UTC" firstStartedPulling="2026-04-24 23:54:48.873282893 +0000 UTC m=+22.341431747" lastFinishedPulling="2026-04-24 23:54:51.681677414 +0000 UTC m=+25.149826266" observedRunningTime="2026-04-24 23:54:51.891234983 +0000 UTC m=+25.359383845" watchObservedRunningTime="2026-04-24 23:54:53.902214441 +0000 UTC m=+27.370363306" Apr 24 23:54:54.728423 kubelet[3571]: E0424 23:54:54.728197 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:54:56.728678 kubelet[3571]: E0424 23:54:56.728607 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:54:58.733025 kubelet[3571]: E0424 23:54:58.728355 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:00.728605 kubelet[3571]: E0424 23:55:00.728493 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:02.728939 kubelet[3571]: E0424 23:55:02.728716 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:03.637178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803910890.mount: Deactivated successfully. 
Apr 24 23:55:03.688315 containerd[2101]: time="2026-04-24T23:55:03.680694698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:03.691548 containerd[2101]: time="2026-04-24T23:55:03.682927109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:55:03.704374 containerd[2101]: time="2026-04-24T23:55:03.704312786Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:03.706677 containerd[2101]: time="2026-04-24T23:55:03.706627020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:03.707703 containerd[2101]: time="2026-04-24T23:55:03.707656351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.825550278s" Apr 24 23:55:03.707829 containerd[2101]: time="2026-04-24T23:55:03.707708909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:55:03.730164 containerd[2101]: time="2026-04-24T23:55:03.730052716Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:55:03.768159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431244462.mount: 
Deactivated successfully. Apr 24 23:55:03.770379 containerd[2101]: time="2026-04-24T23:55:03.770329363Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92\"" Apr 24 23:55:03.771342 containerd[2101]: time="2026-04-24T23:55:03.771309662Z" level=info msg="StartContainer for \"cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92\"" Apr 24 23:55:03.897427 containerd[2101]: time="2026-04-24T23:55:03.896296001Z" level=info msg="StartContainer for \"cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92\" returns successfully" Apr 24 23:55:04.008090 containerd[2101]: time="2026-04-24T23:55:04.006225562Z" level=info msg="shim disconnected" id=cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92 namespace=k8s.io Apr 24 23:55:04.008090 containerd[2101]: time="2026-04-24T23:55:04.006339210Z" level=warning msg="cleaning up after shim disconnected" id=cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92 namespace=k8s.io Apr 24 23:55:04.008090 containerd[2101]: time="2026-04-24T23:55:04.006353683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:55:04.025157 containerd[2101]: time="2026-04-24T23:55:04.025101727Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:55:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:55:04.637047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc2881ce8dd51762e5488f3cc8a15635fc11829cfe65bc600f119108eec6da92-rootfs.mount: Deactivated successfully. 
Apr 24 23:55:04.728621 kubelet[3571]: E0424 23:55:04.727591 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:04.920697 containerd[2101]: time="2026-04-24T23:55:04.920552504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:55:06.760331 kubelet[3571]: E0424 23:55:06.758385 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:08.730708 kubelet[3571]: E0424 23:55:08.730660 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:08.760873 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:55:08.758507 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:55:08.758592 systemd-resolved[1988]: Flushed all caches. 
Apr 24 23:55:08.825002 containerd[2101]: time="2026-04-24T23:55:08.824949095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:08.826484 containerd[2101]: time="2026-04-24T23:55:08.826253850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:55:08.829288 containerd[2101]: time="2026-04-24T23:55:08.827729024Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:08.830858 containerd[2101]: time="2026-04-24T23:55:08.830819191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:08.831869 containerd[2101]: time="2026-04-24T23:55:08.831833619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.911220375s" Apr 24 23:55:08.832005 containerd[2101]: time="2026-04-24T23:55:08.831983651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:55:08.837803 containerd[2101]: time="2026-04-24T23:55:08.837755942Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:55:08.892839 containerd[2101]: time="2026-04-24T23:55:08.892777521Z" level=info msg="CreateContainer 
within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee\"" Apr 24 23:55:08.896722 containerd[2101]: time="2026-04-24T23:55:08.895118961Z" level=info msg="StartContainer for \"f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee\"" Apr 24 23:55:08.950260 systemd[1]: run-containerd-runc-k8s.io-f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee-runc.NvEzt9.mount: Deactivated successfully. Apr 24 23:55:08.988952 containerd[2101]: time="2026-04-24T23:55:08.988196196Z" level=info msg="StartContainer for \"f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee\" returns successfully" Apr 24 23:55:10.058622 containerd[2101]: time="2026-04-24T23:55:10.058399173Z" level=info msg="shim disconnected" id=f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee namespace=k8s.io Apr 24 23:55:10.058622 containerd[2101]: time="2026-04-24T23:55:10.058472100Z" level=warning msg="cleaning up after shim disconnected" id=f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee namespace=k8s.io Apr 24 23:55:10.058622 containerd[2101]: time="2026-04-24T23:55:10.058484064Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:55:10.059934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f26e7666c069f6355b9e804c6f92770ec52e880afe6e3a351747e4277e1ac4ee-rootfs.mount: Deactivated successfully. 
Apr 24 23:55:10.095845 kubelet[3571]: I0424 23:55:10.089721 3571 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 24 23:55:10.424815 kubelet[3571]: I0424 23:55:10.424735 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3d5ba2-1e47-4166-82c1-9c12137f6661-config-volume\") pod \"coredns-674b8bbfcf-6dpzw\" (UID: \"ed3d5ba2-1e47-4166-82c1-9c12137f6661\") " pod="kube-system/coredns-674b8bbfcf-6dpzw"
Apr 24 23:55:10.425186 kubelet[3571]: I0424 23:55:10.425027 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tzc4\" (UniqueName: \"kubernetes.io/projected/527ec70e-1bb6-4d30-8070-55e2af7c2275-kube-api-access-9tzc4\") pod \"coredns-674b8bbfcf-4c8vb\" (UID: \"527ec70e-1bb6-4d30-8070-55e2af7c2275\") " pod="kube-system/coredns-674b8bbfcf-4c8vb"
Apr 24 23:55:10.425186 kubelet[3571]: I0424 23:55:10.425116 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss66f\" (UniqueName: \"kubernetes.io/projected/481e03e3-267c-445f-b620-060c178d7beb-kube-api-access-ss66f\") pod \"calico-apiserver-65f886c557-5hqq5\" (UID: \"481e03e3-267c-445f-b620-060c178d7beb\") " pod="calico-system/calico-apiserver-65f886c557-5hqq5"
Apr 24 23:55:10.425637 kubelet[3571]: I0424 23:55:10.425494 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/857384f3-d2e7-446d-9732-a43f27f17e84-goldmane-key-pair\") pod \"goldmane-5b85766d88-5rfzw\" (UID: \"857384f3-d2e7-446d-9732-a43f27f17e84\") " pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:10.425637 kubelet[3571]: I0424 23:55:10.425528 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-ca-bundle\") pod \"whisker-7858979b86-2k95n\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:10.425942 kubelet[3571]: I0424 23:55:10.425816 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8pd\" (UniqueName: \"kubernetes.io/projected/ed3d5ba2-1e47-4166-82c1-9c12137f6661-kube-api-access-qd8pd\") pod \"coredns-674b8bbfcf-6dpzw\" (UID: \"ed3d5ba2-1e47-4166-82c1-9c12137f6661\") " pod="kube-system/coredns-674b8bbfcf-6dpzw"
Apr 24 23:55:10.426120 kubelet[3571]: I0424 23:55:10.426063 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-nginx-config\") pod \"whisker-7858979b86-2k95n\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:10.426395 kubelet[3571]: I0424 23:55:10.426342 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vf6\" (UniqueName: \"kubernetes.io/projected/9c70e107-88ef-4f1b-bea2-7693185d0306-kube-api-access-g9vf6\") pod \"calico-apiserver-65f886c557-mclkn\" (UID: \"9c70e107-88ef-4f1b-bea2-7693185d0306\") " pod="calico-system/calico-apiserver-65f886c557-mclkn"
Apr 24 23:55:10.426612 kubelet[3571]: I0424 23:55:10.426378 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwcq\" (UniqueName: \"kubernetes.io/projected/40571a1b-e447-4152-bb90-dfc32e8c5e7a-kube-api-access-rpwcq\") pod \"whisker-7858979b86-2k95n\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:10.426775 kubelet[3571]: I0424 23:55:10.426586 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/481e03e3-267c-445f-b620-060c178d7beb-calico-apiserver-certs\") pod \"calico-apiserver-65f886c557-5hqq5\" (UID: \"481e03e3-267c-445f-b620-060c178d7beb\") " pod="calico-system/calico-apiserver-65f886c557-5hqq5"
Apr 24 23:55:10.426775 kubelet[3571]: I0424 23:55:10.426722 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/857384f3-d2e7-446d-9732-a43f27f17e84-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-5rfzw\" (UID: \"857384f3-d2e7-446d-9732-a43f27f17e84\") " pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:10.426974 kubelet[3571]: I0424 23:55:10.426746 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-backend-key-pair\") pod \"whisker-7858979b86-2k95n\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:10.427204 kubelet[3571]: I0424 23:55:10.427153 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6-tigera-ca-bundle\") pod \"calico-kube-controllers-5597f658fb-6hcjb\" (UID: \"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6\") " pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb"
Apr 24 23:55:10.427363 kubelet[3571]: I0424 23:55:10.427184 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jk4c\" (UniqueName: \"kubernetes.io/projected/6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6-kube-api-access-4jk4c\") pod \"calico-kube-controllers-5597f658fb-6hcjb\" (UID: \"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6\") " pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb"
Apr 24 23:55:10.427363 kubelet[3571]: I0424 23:55:10.427317 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8pl\" (UniqueName: \"kubernetes.io/projected/857384f3-d2e7-446d-9732-a43f27f17e84-kube-api-access-zt8pl\") pod \"goldmane-5b85766d88-5rfzw\" (UID: \"857384f3-d2e7-446d-9732-a43f27f17e84\") " pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:10.427552 kubelet[3571]: I0424 23:55:10.427482 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c70e107-88ef-4f1b-bea2-7693185d0306-calico-apiserver-certs\") pod \"calico-apiserver-65f886c557-mclkn\" (UID: \"9c70e107-88ef-4f1b-bea2-7693185d0306\") " pod="calico-system/calico-apiserver-65f886c557-mclkn"
Apr 24 23:55:10.427552 kubelet[3571]: I0424 23:55:10.427514 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/527ec70e-1bb6-4d30-8070-55e2af7c2275-config-volume\") pod \"coredns-674b8bbfcf-4c8vb\" (UID: \"527ec70e-1bb6-4d30-8070-55e2af7c2275\") " pod="kube-system/coredns-674b8bbfcf-4c8vb"
Apr 24 23:55:10.427840 kubelet[3571]: I0424 23:55:10.427694 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/857384f3-d2e7-446d-9732-a43f27f17e84-config\") pod \"goldmane-5b85766d88-5rfzw\" (UID: \"857384f3-d2e7-446d-9732-a43f27f17e84\") " pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:10.677137 containerd[2101]: time="2026-04-24T23:55:10.677006982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5597f658fb-6hcjb,Uid:6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:10.681888 containerd[2101]: time="2026-04-24T23:55:10.681841530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dpzw,Uid:ed3d5ba2-1e47-4166-82c1-9c12137f6661,Namespace:kube-system,Attempt:0,}"
Apr 24 23:55:10.688028 containerd[2101]: time="2026-04-24T23:55:10.687984591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4c8vb,Uid:527ec70e-1bb6-4d30-8070-55e2af7c2275,Namespace:kube-system,Attempt:0,}"
Apr 24 23:55:10.689196 containerd[2101]: time="2026-04-24T23:55:10.689161492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-5hqq5,Uid:481e03e3-267c-445f-b620-060c178d7beb,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:10.692435 containerd[2101]: time="2026-04-24T23:55:10.692399843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5rfzw,Uid:857384f3-d2e7-446d-9732-a43f27f17e84,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:10.731961 containerd[2101]: time="2026-04-24T23:55:10.731850087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7858979b86-2k95n,Uid:40571a1b-e447-4152-bb90-dfc32e8c5e7a,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:10.732627 containerd[2101]: time="2026-04-24T23:55:10.732597136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-mclkn,Uid:9c70e107-88ef-4f1b-bea2-7693185d0306,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:10.737672 containerd[2101]: time="2026-04-24T23:55:10.737449805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clgnl,Uid:54f65b93-ac7d-4a34-935e-59195780993c,Namespace:calico-system,Attempt:0,}"
Apr 24 23:55:11.004320 containerd[2101]: time="2026-04-24T23:55:11.004199015Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 24 23:55:11.044621 containerd[2101]: time="2026-04-24T23:55:11.044555105Z" level=info msg="CreateContainer within sandbox \"8ca39861329cce6c405accf80a88b7c953651f94d4f5f5bc305031cd4226ab54\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb\""
Apr 24 23:55:11.046648 containerd[2101]: time="2026-04-24T23:55:11.046003872Z" level=info msg="StartContainer for \"48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb\""
Apr 24 23:55:11.245564 containerd[2101]: time="2026-04-24T23:55:11.243103824Z" level=info msg="StartContainer for \"48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb\" returns successfully"
Apr 24 23:55:11.566794 containerd[2101]: time="2026-04-24T23:55:11.566636222Z" level=error msg="Failed to destroy network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.566794 containerd[2101]: time="2026-04-24T23:55:11.566698648Z" level=error msg="Failed to destroy network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.575984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e-shm.mount: Deactivated successfully.
Apr 24 23:55:11.576930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f-shm.mount: Deactivated successfully.
Apr 24 23:55:11.583479 containerd[2101]: time="2026-04-24T23:55:11.583004375Z" level=error msg="encountered an error cleaning up failed sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.597562 containerd[2101]: time="2026-04-24T23:55:11.566654135Z" level=error msg="Failed to destroy network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.598081 containerd[2101]: time="2026-04-24T23:55:11.566659821Z" level=error msg="Failed to destroy network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.599742 containerd[2101]: time="2026-04-24T23:55:11.599697941Z" level=error msg="encountered an error cleaning up failed sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.599916 containerd[2101]: time="2026-04-24T23:55:11.599783556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clgnl,Uid:54f65b93-ac7d-4a34-935e-59195780993c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.599984 containerd[2101]: time="2026-04-24T23:55:11.598188865Z" level=error msg="Failed to destroy network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.608619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8-shm.mount: Deactivated successfully.
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.598199421Z" level=error msg="encountered an error cleaning up failed sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.619439341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-mclkn,Uid:9c70e107-88ef-4f1b-bea2-7693185d0306,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.619720512Z" level=error msg="encountered an error cleaning up failed sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.619764550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4c8vb,Uid:527ec70e-1bb6-4d30-8070-55e2af7c2275,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.620039416Z" level=error msg="encountered an error cleaning up failed sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.620206 containerd[2101]: time="2026-04-24T23:55:11.620082292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5597f658fb-6hcjb,Uid:6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.621367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d-shm.mount: Deactivated successfully.
Apr 24 23:55:11.621593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c-shm.mount: Deactivated successfully.
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.622433845Z" level=error msg="Failed to destroy network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.622577638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7858979b86-2k95n,Uid:40571a1b-e447-4152-bb90-dfc32e8c5e7a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.622940368Z" level=error msg="encountered an error cleaning up failed sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.622993914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-5hqq5,Uid:481e03e3-267c-445f-b620-060c178d7beb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627190172Z" level=error msg="Failed to destroy network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627192002Z" level=error msg="Failed to destroy network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627614954Z" level=error msg="encountered an error cleaning up failed sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627678701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dpzw,Uid:ed3d5ba2-1e47-4166-82c1-9c12137f6661,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627630452Z" level=error msg="encountered an error cleaning up failed sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629084 containerd[2101]: time="2026-04-24T23:55:11.627770120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5rfzw,Uid:857384f3-d2e7-446d-9732-a43f27f17e84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629720 kubelet[3571]: E0424 23:55:11.625181 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629720 kubelet[3571]: E0424 23:55:11.626109 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.629720 kubelet[3571]: E0424 23:55:11.626957 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65f886c557-5hqq5"
Apr 24 23:55:11.629720 kubelet[3571]: E0424 23:55:11.628785 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65f886c557-5hqq5"
Apr 24 23:55:11.630314 kubelet[3571]: E0424 23:55:11.628841 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f886c557-5hqq5_calico-system(481e03e3-267c-445f-b620-060c178d7beb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f886c557-5hqq5_calico-system(481e03e3-267c-445f-b620-060c178d7beb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65f886c557-5hqq5" podUID="481e03e3-267c-445f-b620-060c178d7beb"
Apr 24 23:55:11.630314 kubelet[3571]: E0424 23:55:11.627045 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4c8vb"
Apr 24 23:55:11.630314 kubelet[3571]: E0424 23:55:11.629873 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4c8vb"
Apr 24 23:55:11.631085 kubelet[3571]: E0424 23:55:11.629933 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4c8vb_kube-system(527ec70e-1bb6-4d30-8070-55e2af7c2275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4c8vb_kube-system(527ec70e-1bb6-4d30-8070-55e2af7c2275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4c8vb" podUID="527ec70e-1bb6-4d30-8070-55e2af7c2275"
Apr 24 23:55:11.631085 kubelet[3571]: E0424 23:55:11.630071 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.631085 kubelet[3571]: E0424 23:55:11.630102 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:55:11.631254 kubelet[3571]: E0424 23:55:11.630123 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clgnl"
Apr 24 23:55:11.631254 kubelet[3571]: E0424 23:55:11.630163 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-clgnl_calico-system(54f65b93-ac7d-4a34-935e-59195780993c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-clgnl_calico-system(54f65b93-ac7d-4a34-935e-59195780993c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c"
Apr 24 23:55:11.631254 kubelet[3571]: E0424 23:55:11.630206 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.631537 kubelet[3571]: E0424 23:55:11.630230 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65f886c557-mclkn"
Apr 24 23:55:11.631537 kubelet[3571]: E0424 23:55:11.630248 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-65f886c557-mclkn"
Apr 24 23:55:11.631537 kubelet[3571]: E0424 23:55:11.631355 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.631537 kubelet[3571]: E0424 23:55:11.631412 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb"
Apr 24 23:55:11.631741 kubelet[3571]: E0424 23:55:11.631438 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb"
Apr 24 23:55:11.631741 kubelet[3571]: E0424 23:55:11.631485 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5597f658fb-6hcjb_calico-system(6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5597f658fb-6hcjb_calico-system(6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb" podUID="6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6"
Apr 24 23:55:11.631741 kubelet[3571]: E0424 23:55:11.631528 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.631944 kubelet[3571]: E0424 23:55:11.631556 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:11.631944 kubelet[3571]: E0424 23:55:11.631576 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7858979b86-2k95n"
Apr 24 23:55:11.631944 kubelet[3571]: E0424 23:55:11.631615 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7858979b86-2k95n_calico-system(40571a1b-e447-4152-bb90-dfc32e8c5e7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7858979b86-2k95n_calico-system(40571a1b-e447-4152-bb90-dfc32e8c5e7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7858979b86-2k95n" podUID="40571a1b-e447-4152-bb90-dfc32e8c5e7a"
Apr 24 23:55:11.632189 kubelet[3571]: E0424 23:55:11.631799 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f886c557-mclkn_calico-system(9c70e107-88ef-4f1b-bea2-7693185d0306)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f886c557-mclkn_calico-system(9c70e107-88ef-4f1b-bea2-7693185d0306)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65f886c557-mclkn" podUID="9c70e107-88ef-4f1b-bea2-7693185d0306"
Apr 24 23:55:11.632189 kubelet[3571]: E0424 23:55:11.631915 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.632189 kubelet[3571]: E0424 23:55:11.631947 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:11.632407 kubelet[3571]: E0424 23:55:11.631967 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-5rfzw"
Apr 24 23:55:11.632407 kubelet[3571]: E0424 23:55:11.632006 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-5rfzw_calico-system(857384f3-d2e7-446d-9732-a43f27f17e84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-5rfzw_calico-system(857384f3-d2e7-446d-9732-a43f27f17e84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-5rfzw" podUID="857384f3-d2e7-446d-9732-a43f27f17e84"
Apr 24 23:55:11.632407 kubelet[3571]: E0424 23:55:11.632040 3571 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:55:11.632556 kubelet[3571]: E0424 23:55:11.632062 3571 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6dpzw"
Apr 24 23:55:11.632556 kubelet[3571]: E0424 23:55:11.632081 3571 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code
= Unknown desc = failed to setup network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6dpzw" Apr 24 23:55:11.632556 kubelet[3571]: E0424 23:55:11.632119 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6dpzw_kube-system(ed3d5ba2-1e47-4166-82c1-9c12137f6661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6dpzw_kube-system(ed3d5ba2-1e47-4166-82c1-9c12137f6661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6dpzw" podUID="ed3d5ba2-1e47-4166-82c1-9c12137f6661" Apr 24 23:55:11.986971 kubelet[3571]: I0424 23:55:11.986793 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:12.040011 kubelet[3571]: I0424 23:55:12.039511 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:12.057951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599-shm.mount: Deactivated successfully. Apr 24 23:55:12.058157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1-shm.mount: Deactivated successfully. 
Apr 24 23:55:12.058340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3-shm.mount: Deactivated successfully. Apr 24 23:55:12.063005 kubelet[3571]: I0424 23:55:12.050417 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bl6sq" podStartSLOduration=4.133593362 podStartE2EDuration="24.05039621s" podCreationTimestamp="2026-04-24 23:54:48 +0000 UTC" firstStartedPulling="2026-04-24 23:54:48.91604078 +0000 UTC m=+22.384189622" lastFinishedPulling="2026-04-24 23:55:08.832843612 +0000 UTC m=+42.300992470" observedRunningTime="2026-04-24 23:55:12.049492662 +0000 UTC m=+45.517641523" watchObservedRunningTime="2026-04-24 23:55:12.05039621 +0000 UTC m=+45.518545071" Apr 24 23:55:12.069379 containerd[2101]: time="2026-04-24T23:55:12.068541744Z" level=info msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" Apr 24 23:55:12.072352 containerd[2101]: time="2026-04-24T23:55:12.070380392Z" level=info msg="Ensure that sandbox 189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c in task-service has been cleanup successfully" Apr 24 23:55:12.089410 containerd[2101]: time="2026-04-24T23:55:12.089357789Z" level=info msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" Apr 24 23:55:12.095332 kubelet[3571]: I0424 23:55:12.093619 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:12.095575 containerd[2101]: time="2026-04-24T23:55:12.095543513Z" level=info msg="Ensure that sandbox 9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d in task-service has been cleanup successfully" Apr 24 23:55:12.097575 containerd[2101]: time="2026-04-24T23:55:12.097537424Z" level=info msg="StopPodSandbox for 
\"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" Apr 24 23:55:12.098136 containerd[2101]: time="2026-04-24T23:55:12.098110539Z" level=info msg="Ensure that sandbox 5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e in task-service has been cleanup successfully" Apr 24 23:55:12.105631 kubelet[3571]: I0424 23:55:12.104980 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:12.113142 containerd[2101]: time="2026-04-24T23:55:12.113092581Z" level=info msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\"" Apr 24 23:55:12.113375 containerd[2101]: time="2026-04-24T23:55:12.113335398Z" level=info msg="Ensure that sandbox f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1 in task-service has been cleanup successfully" Apr 24 23:55:12.117329 kubelet[3571]: I0424 23:55:12.117264 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:12.119119 containerd[2101]: time="2026-04-24T23:55:12.118992579Z" level=info msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" Apr 24 23:55:12.119574 containerd[2101]: time="2026-04-24T23:55:12.119525030Z" level=info msg="Ensure that sandbox 3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f in task-service has been cleanup successfully" Apr 24 23:55:12.128544 kubelet[3571]: I0424 23:55:12.128509 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:12.130631 containerd[2101]: time="2026-04-24T23:55:12.130003636Z" level=info msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" Apr 24 23:55:12.130631 containerd[2101]: 
time="2026-04-24T23:55:12.130262057Z" level=info msg="Ensure that sandbox 49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3 in task-service has been cleanup successfully" Apr 24 23:55:12.137831 kubelet[3571]: I0424 23:55:12.137795 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:12.141609 containerd[2101]: time="2026-04-24T23:55:12.141558613Z" level=info msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" Apr 24 23:55:12.141826 containerd[2101]: time="2026-04-24T23:55:12.141779974Z" level=info msg="Ensure that sandbox 05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599 in task-service has been cleanup successfully" Apr 24 23:55:12.148113 kubelet[3571]: I0424 23:55:12.147455 3571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:12.150601 containerd[2101]: time="2026-04-24T23:55:12.150448446Z" level=info msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" Apr 24 23:55:12.162475 containerd[2101]: time="2026-04-24T23:55:12.162419067Z" level=info msg="Ensure that sandbox 4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8 in task-service has been cleanup successfully" Apr 24 23:55:12.259675 containerd[2101]: time="2026-04-24T23:55:12.259471134Z" level=error msg="StopPodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" failed" error="failed to destroy network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.261862 kubelet[3571]: E0424 23:55:12.261217 3571 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:12.263357 kubelet[3571]: E0424 23:55:12.262039 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e"} Apr 24 23:55:12.263579 kubelet[3571]: E0424 23:55:12.263500 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.263579 kubelet[3571]: E0424 23:55:12.263539 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7858979b86-2k95n" podUID="40571a1b-e447-4152-bb90-dfc32e8c5e7a" Apr 24 23:55:12.293005 containerd[2101]: time="2026-04-24T23:55:12.292811414Z" level=error 
msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" failed" error="failed to destroy network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.294080 kubelet[3571]: E0424 23:55:12.293484 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:12.294080 kubelet[3571]: E0424 23:55:12.293687 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"} Apr 24 23:55:12.294080 kubelet[3571]: E0424 23:55:12.293735 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed3d5ba2-1e47-4166-82c1-9c12137f6661\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.294080 kubelet[3571]: E0424 23:55:12.293769 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed3d5ba2-1e47-4166-82c1-9c12137f6661\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6dpzw" podUID="ed3d5ba2-1e47-4166-82c1-9c12137f6661" Apr 24 23:55:12.319785 containerd[2101]: time="2026-04-24T23:55:12.319637832Z" level=error msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" failed" error="failed to destroy network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.320662 kubelet[3571]: E0424 23:55:12.320466 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:12.320662 kubelet[3571]: E0424 23:55:12.320527 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f"} Apr 24 23:55:12.320662 kubelet[3571]: E0424 23:55:12.320582 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c70e107-88ef-4f1b-bea2-7693185d0306\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.320662 kubelet[3571]: E0424 23:55:12.320619 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c70e107-88ef-4f1b-bea2-7693185d0306\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65f886c557-mclkn" podUID="9c70e107-88ef-4f1b-bea2-7693185d0306" Apr 24 23:55:12.336040 containerd[2101]: time="2026-04-24T23:55:12.335406896Z" level=error msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" failed" error="failed to destroy network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.336221 kubelet[3571]: E0424 23:55:12.335797 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:12.336221 kubelet[3571]: E0424 23:55:12.335854 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8"} Apr 24 23:55:12.336221 kubelet[3571]: E0424 23:55:12.335894 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"527ec70e-1bb6-4d30-8070-55e2af7c2275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.336221 kubelet[3571]: E0424 23:55:12.335930 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"527ec70e-1bb6-4d30-8070-55e2af7c2275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4c8vb" podUID="527ec70e-1bb6-4d30-8070-55e2af7c2275" Apr 24 23:55:12.354232 containerd[2101]: time="2026-04-24T23:55:12.354176934Z" level=error msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" failed" error="failed to destroy network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.354819 kubelet[3571]: E0424 23:55:12.354620 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:12.354819 kubelet[3571]: E0424 23:55:12.354681 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c"} Apr 24 23:55:12.354819 kubelet[3571]: E0424 23:55:12.354723 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.354819 kubelet[3571]: E0424 23:55:12.354761 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb" podUID="6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6" Apr 24 23:55:12.360596 containerd[2101]: time="2026-04-24T23:55:12.360549702Z" level=error msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" failed" error="failed to destroy 
network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.360884 kubelet[3571]: E0424 23:55:12.360829 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:12.361004 kubelet[3571]: E0424 23:55:12.360903 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3"} Apr 24 23:55:12.361004 kubelet[3571]: E0424 23:55:12.360943 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"481e03e3-267c-445f-b620-060c178d7beb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.361004 kubelet[3571]: E0424 23:55:12.360979 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"481e03e3-267c-445f-b620-060c178d7beb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-65f886c557-5hqq5" podUID="481e03e3-267c-445f-b620-060c178d7beb" Apr 24 23:55:12.362975 containerd[2101]: time="2026-04-24T23:55:12.362911130Z" level=error msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" failed" error="failed to destroy network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.363405 kubelet[3571]: E0424 23:55:12.363137 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:12.363405 kubelet[3571]: E0424 23:55:12.363184 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d"} Apr 24 23:55:12.363405 kubelet[3571]: E0424 23:55:12.363222 3571 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54f65b93-ac7d-4a34-935e-59195780993c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.363405 kubelet[3571]: E0424 23:55:12.363253 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54f65b93-ac7d-4a34-935e-59195780993c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clgnl" podUID="54f65b93-ac7d-4a34-935e-59195780993c" Apr 24 23:55:12.368440 containerd[2101]: time="2026-04-24T23:55:12.368384774Z" level=error msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" failed" error="failed to destroy network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:55:12.368670 kubelet[3571]: E0424 23:55:12.368615 3571 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:12.368815 kubelet[3571]: E0424 23:55:12.368687 3571 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599"} Apr 24 23:55:12.368815 kubelet[3571]: E0424 23:55:12.368734 3571 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"857384f3-d2e7-446d-9732-a43f27f17e84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:55:12.368815 kubelet[3571]: E0424 23:55:12.368770 3571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"857384f3-d2e7-446d-9732-a43f27f17e84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-5rfzw" podUID="857384f3-d2e7-446d-9732-a43f27f17e84" Apr 24 23:55:12.380415 kubelet[3571]: I0424 23:55:12.380369 3571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:55:13.186854 systemd[1]: run-containerd-runc-k8s.io-48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb-runc.GvqGeB.mount: Deactivated successfully. Apr 24 23:55:13.284781 containerd[2101]: time="2026-04-24T23:55:13.284735169Z" level=info msg="StopPodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.460 [INFO][4848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.461 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" iface="eth0" netns="/var/run/netns/cni-616c5e21-e7a9-d421-cfd5-c7c8de079a90" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.461 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" iface="eth0" netns="/var/run/netns/cni-616c5e21-e7a9-d421-cfd5-c7c8de079a90" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.464 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" iface="eth0" netns="/var/run/netns/cni-616c5e21-e7a9-d421-cfd5-c7c8de079a90" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.464 [INFO][4848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.464 [INFO][4848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.523 [INFO][4863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.524 [INFO][4863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.524 [INFO][4863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.539 [WARNING][4863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.539 [INFO][4863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.541 [INFO][4863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:13.551731 containerd[2101]: 2026-04-24 23:55:13.547 [INFO][4848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:13.551731 containerd[2101]: time="2026-04-24T23:55:13.551236566Z" level=info msg="TearDown network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" successfully" Apr 24 23:55:13.551731 containerd[2101]: time="2026-04-24T23:55:13.551464254Z" level=info msg="StopPodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" returns successfully" Apr 24 23:55:13.563262 systemd[1]: run-netns-cni\x2d616c5e21\x2de7a9\x2dd421\x2dcfd5\x2dc7c8de079a90.mount: Deactivated successfully. 
Apr 24 23:55:13.662973 kubelet[3571]: I0424 23:55:13.662094 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-ca-bundle\") pod \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " Apr 24 23:55:13.662973 kubelet[3571]: I0424 23:55:13.662174 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-nginx-config\") pod \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " Apr 24 23:55:13.662973 kubelet[3571]: I0424 23:55:13.662214 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpwcq\" (UniqueName: \"kubernetes.io/projected/40571a1b-e447-4152-bb90-dfc32e8c5e7a-kube-api-access-rpwcq\") pod \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " Apr 24 23:55:13.662973 kubelet[3571]: I0424 23:55:13.662251 3571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-backend-key-pair\") pod \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\" (UID: \"40571a1b-e447-4152-bb90-dfc32e8c5e7a\") " Apr 24 23:55:13.678529 kubelet[3571]: I0424 23:55:13.677933 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "40571a1b-e447-4152-bb90-dfc32e8c5e7a" (UID: "40571a1b-e447-4152-bb90-dfc32e8c5e7a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:55:13.682671 kubelet[3571]: I0424 23:55:13.680931 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "40571a1b-e447-4152-bb90-dfc32e8c5e7a" (UID: "40571a1b-e447-4152-bb90-dfc32e8c5e7a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:55:13.682671 kubelet[3571]: I0424 23:55:13.675172 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "40571a1b-e447-4152-bb90-dfc32e8c5e7a" (UID: "40571a1b-e447-4152-bb90-dfc32e8c5e7a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:55:13.682878 kubelet[3571]: I0424 23:55:13.682672 3571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40571a1b-e447-4152-bb90-dfc32e8c5e7a-kube-api-access-rpwcq" (OuterVolumeSpecName: "kube-api-access-rpwcq") pod "40571a1b-e447-4152-bb90-dfc32e8c5e7a" (UID: "40571a1b-e447-4152-bb90-dfc32e8c5e7a"). InnerVolumeSpecName "kube-api-access-rpwcq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:55:13.683543 systemd[1]: var-lib-kubelet-pods-40571a1b\x2de447\x2d4152\x2dbb90\x2ddfc32e8c5e7a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:55:13.688346 systemd[1]: var-lib-kubelet-pods-40571a1b\x2de447\x2d4152\x2dbb90\x2ddfc32e8c5e7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpwcq.mount: Deactivated successfully. 
Apr 24 23:55:13.763462 kubelet[3571]: I0424 23:55:13.763406 3571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpwcq\" (UniqueName: \"kubernetes.io/projected/40571a1b-e447-4152-bb90-dfc32e8c5e7a-kube-api-access-rpwcq\") on node \"ip-172-31-23-136\" DevicePath \"\"" Apr 24 23:55:13.763462 kubelet[3571]: I0424 23:55:13.763461 3571 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-backend-key-pair\") on node \"ip-172-31-23-136\" DevicePath \"\"" Apr 24 23:55:13.763656 kubelet[3571]: I0424 23:55:13.763477 3571 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-whisker-ca-bundle\") on node \"ip-172-31-23-136\" DevicePath \"\"" Apr 24 23:55:13.763656 kubelet[3571]: I0424 23:55:13.763489 3571 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/40571a1b-e447-4152-bb90-dfc32e8c5e7a-nginx-config\") on node \"ip-172-31-23-136\" DevicePath \"\"" Apr 24 23:55:14.220648 systemd[1]: run-containerd-runc-k8s.io-48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb-runc.BvdwSP.mount: Deactivated successfully. 
Apr 24 23:55:14.366890 kubelet[3571]: I0424 23:55:14.366814 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cffe0886-2be8-48b4-8bc4-d557589a59b0-nginx-config\") pod \"whisker-74c88c6b9f-7qpbs\" (UID: \"cffe0886-2be8-48b4-8bc4-d557589a59b0\") " pod="calico-system/whisker-74c88c6b9f-7qpbs" Apr 24 23:55:14.367177 kubelet[3571]: I0424 23:55:14.367150 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cffe0886-2be8-48b4-8bc4-d557589a59b0-whisker-backend-key-pair\") pod \"whisker-74c88c6b9f-7qpbs\" (UID: \"cffe0886-2be8-48b4-8bc4-d557589a59b0\") " pod="calico-system/whisker-74c88c6b9f-7qpbs" Apr 24 23:55:14.367353 kubelet[3571]: I0424 23:55:14.367302 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvkd4\" (UniqueName: \"kubernetes.io/projected/cffe0886-2be8-48b4-8bc4-d557589a59b0-kube-api-access-hvkd4\") pod \"whisker-74c88c6b9f-7qpbs\" (UID: \"cffe0886-2be8-48b4-8bc4-d557589a59b0\") " pod="calico-system/whisker-74c88c6b9f-7qpbs" Apr 24 23:55:14.367462 kubelet[3571]: I0424 23:55:14.367370 3571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cffe0886-2be8-48b4-8bc4-d557589a59b0-whisker-ca-bundle\") pod \"whisker-74c88c6b9f-7qpbs\" (UID: \"cffe0886-2be8-48b4-8bc4-d557589a59b0\") " pod="calico-system/whisker-74c88c6b9f-7qpbs" Apr 24 23:55:14.609794 containerd[2101]: time="2026-04-24T23:55:14.609179138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c88c6b9f-7qpbs,Uid:cffe0886-2be8-48b4-8bc4-d557589a59b0,Namespace:calico-system,Attempt:0,}" Apr 24 23:55:14.737439 kubelet[3571]: I0424 23:55:14.736121 3571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="40571a1b-e447-4152-bb90-dfc32e8c5e7a" path="/var/lib/kubelet/pods/40571a1b-e447-4152-bb90-dfc32e8c5e7a/volumes" Apr 24 23:55:14.959505 systemd-networkd[1655]: cali698f4594777: Link UP Apr 24 23:55:14.962523 systemd-networkd[1655]: cali698f4594777: Gained carrier Apr 24 23:55:14.980622 (udev-worker)[5023]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.712 [ERROR][4928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.748 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0 whisker-74c88c6b9f- calico-system cffe0886-2be8-48b4-8bc4-d557589a59b0 952 0 2026-04-24 23:55:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74c88c6b9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-136 whisker-74c88c6b9f-7qpbs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali698f4594777 [] [] }} ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.748 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.824 [INFO][4992] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" HandleID="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Workload="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.840 [INFO][4992] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" HandleID="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Workload="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"whisker-74c88c6b9f-7qpbs", "timestamp":"2026-04-24 23:55:14.824309014 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000297a20)} Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.841 [INFO][4992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.841 [INFO][4992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.841 [INFO][4992] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.849 [INFO][4992] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.858 [INFO][4992] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.863 [INFO][4992] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.867 [INFO][4992] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.871 [INFO][4992] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.875 [INFO][4992] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.877 [INFO][4992] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116 Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.885 [INFO][4992] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.896 [INFO][4992] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 
handle="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.896 [INFO][4992] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" host="ip-172-31-23-136" Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.896 [INFO][4992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:15.004292 containerd[2101]: 2026-04-24 23:55:14.896 [INFO][4992] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" HandleID="k8s-pod-network.6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Workload="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.899 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0", GenerateName:"whisker-74c88c6b9f-", Namespace:"calico-system", SelfLink:"", UID:"cffe0886-2be8-48b4-8bc4-d557589a59b0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c88c6b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"whisker-74c88c6b9f-7qpbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali698f4594777", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.899 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.65/32] ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.899 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali698f4594777 ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.964 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.970 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" 
Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0", GenerateName:"whisker-74c88c6b9f-", Namespace:"calico-system", SelfLink:"", UID:"cffe0886-2be8-48b4-8bc4-d557589a59b0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c88c6b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116", Pod:"whisker-74c88c6b9f-7qpbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali698f4594777", MAC:"e2:51:82:5c:a7:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:15.010878 containerd[2101]: 2026-04-24 23:55:14.988 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116" Namespace="calico-system" Pod="whisker-74c88c6b9f-7qpbs" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--74c88c6b9f--7qpbs-eth0" Apr 24 23:55:15.168214 
containerd[2101]: time="2026-04-24T23:55:15.167907474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:15.168214 containerd[2101]: time="2026-04-24T23:55:15.168014011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:15.168214 containerd[2101]: time="2026-04-24T23:55:15.168030349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:15.182487 containerd[2101]: time="2026-04-24T23:55:15.179313737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:15.400569 containerd[2101]: time="2026-04-24T23:55:15.400518983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c88c6b9f-7qpbs,Uid:cffe0886-2be8-48b4-8bc4-d557589a59b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116\"" Apr 24 23:55:15.406557 containerd[2101]: time="2026-04-24T23:55:15.406434173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:55:15.661338 kernel: calico-node[5005]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:55:16.633154 (udev-worker)[5022]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:55:16.644259 systemd-networkd[1655]: vxlan.calico: Link UP Apr 24 23:55:16.645113 systemd-networkd[1655]: vxlan.calico: Gained carrier Apr 24 23:55:16.764318 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:55:16.764358 systemd-resolved[1988]: Flushed all caches. Apr 24 23:55:16.769613 systemd-journald[1585]: Under memory pressure, flushing caches. 
Apr 24 23:55:16.822636 systemd-networkd[1655]: cali698f4594777: Gained IPv6LL Apr 24 23:55:17.435927 containerd[2101]: time="2026-04-24T23:55:17.435861496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:17.437304 containerd[2101]: time="2026-04-24T23:55:17.437127974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:55:17.438695 containerd[2101]: time="2026-04-24T23:55:17.438640202Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:17.441894 containerd[2101]: time="2026-04-24T23:55:17.441838845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:17.443231 containerd[2101]: time="2026-04-24T23:55:17.442657814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.03618004s" Apr 24 23:55:17.443231 containerd[2101]: time="2026-04-24T23:55:17.442698318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:55:17.455300 containerd[2101]: time="2026-04-24T23:55:17.455072048Z" level=info msg="CreateContainer within sandbox \"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 
23:55:17.479258 containerd[2101]: time="2026-04-24T23:55:17.479211826Z" level=info msg="CreateContainer within sandbox \"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf31b3dc159613c8dcf3963830d28550d0a34ad813035897bbd6fa403fd00cb4\"" Apr 24 23:55:17.480821 containerd[2101]: time="2026-04-24T23:55:17.480767881Z" level=info msg="StartContainer for \"bf31b3dc159613c8dcf3963830d28550d0a34ad813035897bbd6fa403fd00cb4\"" Apr 24 23:55:17.639012 containerd[2101]: time="2026-04-24T23:55:17.638953982Z" level=info msg="StartContainer for \"bf31b3dc159613c8dcf3963830d28550d0a34ad813035897bbd6fa403fd00cb4\" returns successfully" Apr 24 23:55:17.642221 containerd[2101]: time="2026-04-24T23:55:17.642177322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:55:18.487242 systemd-networkd[1655]: vxlan.calico: Gained IPv6LL Apr 24 23:55:18.806372 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:55:18.809903 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:55:18.806422 systemd-resolved[1988]: Flushed all caches. Apr 24 23:55:19.653376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105433254.mount: Deactivated successfully. 
Apr 24 23:55:19.673506 containerd[2101]: time="2026-04-24T23:55:19.673458405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:19.674773 containerd[2101]: time="2026-04-24T23:55:19.674578360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:55:19.676360 containerd[2101]: time="2026-04-24T23:55:19.675980521Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:19.679160 containerd[2101]: time="2026-04-24T23:55:19.679128336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:19.680212 containerd[2101]: time="2026-04-24T23:55:19.680179991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.037957129s" Apr 24 23:55:19.680832 containerd[2101]: time="2026-04-24T23:55:19.680311871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:55:19.686647 containerd[2101]: time="2026-04-24T23:55:19.686573532Z" level=info msg="CreateContainer within sandbox \"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:55:19.712296 
containerd[2101]: time="2026-04-24T23:55:19.710229664Z" level=info msg="CreateContainer within sandbox \"6bb42a8338326c906d9680bd8f86d438d4b019ad0006842ba06d155d95e87116\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0387a942d232589631d594aac62802efc350171fa7020aadb9c402cafc88845d\"" Apr 24 23:55:19.715291 containerd[2101]: time="2026-04-24T23:55:19.715229180Z" level=info msg="StartContainer for \"0387a942d232589631d594aac62802efc350171fa7020aadb9c402cafc88845d\"" Apr 24 23:55:19.808598 containerd[2101]: time="2026-04-24T23:55:19.808533183Z" level=info msg="StartContainer for \"0387a942d232589631d594aac62802efc350171fa7020aadb9c402cafc88845d\" returns successfully" Apr 24 23:55:20.595069 ntpd[2062]: Listen normally on 6 vxlan.calico 192.168.124.64:123 Apr 24 23:55:20.595165 ntpd[2062]: Listen normally on 7 cali698f4594777 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 24 23:55:20.599394 ntpd[2062]: 24 Apr 23:55:20 ntpd[2062]: Listen normally on 6 vxlan.calico 192.168.124.64:123 Apr 24 23:55:20.599394 ntpd[2062]: 24 Apr 23:55:20 ntpd[2062]: Listen normally on 7 cali698f4594777 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 24 23:55:20.599394 ntpd[2062]: 24 Apr 23:55:20 ntpd[2062]: Listen normally on 8 vxlan.calico [fe80::64c4:5bff:fe9d:8c0%5]:123 Apr 24 23:55:20.595226 ntpd[2062]: Listen normally on 8 vxlan.calico [fe80::64c4:5bff:fe9d:8c0%5]:123 Apr 24 23:55:23.729728 containerd[2101]: time="2026-04-24T23:55:23.729308281Z" level=info msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" Apr 24 23:55:23.845567 kubelet[3571]: I0424 23:55:23.845344 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74c88c6b9f-7qpbs" podStartSLOduration=5.569622638 podStartE2EDuration="9.845310008s" podCreationTimestamp="2026-04-24 23:55:14 +0000 UTC" firstStartedPulling="2026-04-24 23:55:15.405830431 +0000 UTC m=+48.873979287" lastFinishedPulling="2026-04-24 23:55:19.6815178 +0000 
UTC m=+53.149666657" observedRunningTime="2026-04-24 23:55:20.336895737 +0000 UTC m=+53.805044589" watchObservedRunningTime="2026-04-24 23:55:23.845310008 +0000 UTC m=+57.313458871" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.843 [INFO][5307] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.844 [INFO][5307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" iface="eth0" netns="/var/run/netns/cni-2fd4bb50-972b-3656-d184-9e6cd5fd6244" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.846 [INFO][5307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" iface="eth0" netns="/var/run/netns/cni-2fd4bb50-972b-3656-d184-9e6cd5fd6244" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.846 [INFO][5307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" iface="eth0" netns="/var/run/netns/cni-2fd4bb50-972b-3656-d184-9e6cd5fd6244" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.846 [INFO][5307] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.847 [INFO][5307] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.880 [INFO][5314] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.880 [INFO][5314] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.880 [INFO][5314] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.888 [WARNING][5314] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.888 [INFO][5314] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.889 [INFO][5314] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:23.895874 containerd[2101]: 2026-04-24 23:55:23.892 [INFO][5307] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:23.900059 containerd[2101]: time="2026-04-24T23:55:23.899928053Z" level=info msg="TearDown network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" successfully" Apr 24 23:55:23.900059 containerd[2101]: time="2026-04-24T23:55:23.900010363Z" level=info msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" returns successfully" Apr 24 23:55:23.901019 containerd[2101]: time="2026-04-24T23:55:23.900985727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5rfzw,Uid:857384f3-d2e7-446d-9732-a43f27f17e84,Namespace:calico-system,Attempt:1,}" Apr 24 23:55:23.903540 systemd[1]: run-netns-cni\x2d2fd4bb50\x2d972b\x2d3656\x2dd184\x2d9e6cd5fd6244.mount: Deactivated successfully. 
Apr 24 23:55:24.053818 systemd-networkd[1655]: calidc54e0ee47d: Link UP Apr 24 23:55:24.055455 systemd-networkd[1655]: calidc54e0ee47d: Gained carrier Apr 24 23:55:24.058454 (udev-worker)[5340]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:23.966 [INFO][5320] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0 goldmane-5b85766d88- calico-system 857384f3-d2e7-446d-9732-a43f27f17e84 995 0 2026-04-24 23:54:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-136 goldmane-5b85766d88-5rfzw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidc54e0ee47d [] [] }} ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:23.966 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:23.997 [INFO][5332] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" HandleID="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.005 [INFO][5332] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" HandleID="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"goldmane-5b85766d88-5rfzw", "timestamp":"2026-04-24 23:55:23.997554285 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c91e0)} Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.005 [INFO][5332] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.006 [INFO][5332] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.006 [INFO][5332] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.010 [INFO][5332] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.016 [INFO][5332] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.024 [INFO][5332] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.026 [INFO][5332] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.028 [INFO][5332] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.029 [INFO][5332] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.030 [INFO][5332] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985 Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.035 [INFO][5332] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.044 [INFO][5332] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 
handle="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.045 [INFO][5332] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" host="ip-172-31-23-136" Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.045 [INFO][5332] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:24.081626 containerd[2101]: 2026-04-24 23:55:24.045 [INFO][5332] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" HandleID="k8s-pod-network.4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.048 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"857384f3-d2e7-446d-9732-a43f27f17e84", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"goldmane-5b85766d88-5rfzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc54e0ee47d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.048 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.66/32] ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.048 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc54e0ee47d ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.057 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.057 [INFO][5320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"857384f3-d2e7-446d-9732-a43f27f17e84", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985", Pod:"goldmane-5b85766d88-5rfzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc54e0ee47d", MAC:"4e:6a:c2:b1:9d:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:24.082633 containerd[2101]: 2026-04-24 23:55:24.075 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985" Namespace="calico-system" Pod="goldmane-5b85766d88-5rfzw" 
WorkloadEndpoint="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:24.122018 containerd[2101]: time="2026-04-24T23:55:24.115793944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:24.122018 containerd[2101]: time="2026-04-24T23:55:24.115890263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:24.122018 containerd[2101]: time="2026-04-24T23:55:24.115909859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:24.122018 containerd[2101]: time="2026-04-24T23:55:24.117517955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:24.254952 containerd[2101]: time="2026-04-24T23:55:24.254898969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5rfzw,Uid:857384f3-d2e7-446d-9732-a43f27f17e84,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985\"" Apr 24 23:55:24.257426 containerd[2101]: time="2026-04-24T23:55:24.257146749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:55:24.730486 containerd[2101]: time="2026-04-24T23:55:24.728741217Z" level=info msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" Apr 24 23:55:24.735191 containerd[2101]: time="2026-04-24T23:55:24.735154926Z" level=info msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:24.878948 containerd[2101]: 
2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" iface="eth0" netns="/var/run/netns/cni-77d4ffed-c212-81fe-1c01-648662ed2f7a" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" iface="eth0" netns="/var/run/netns/cni-77d4ffed-c212-81fe-1c01-648662ed2f7a" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" iface="eth0" netns="/var/run/netns/cni-77d4ffed-c212-81fe-1c01-648662ed2f7a" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.817 [INFO][5424] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.860 [INFO][5437] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.860 [INFO][5437] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.860 [INFO][5437] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.870 [WARNING][5437] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.870 [INFO][5437] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.872 [INFO][5437] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:24.878948 containerd[2101]: 2026-04-24 23:55:24.874 [INFO][5424] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:24.878948 containerd[2101]: time="2026-04-24T23:55:24.877606119Z" level=info msg="TearDown network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" successfully" Apr 24 23:55:24.878948 containerd[2101]: time="2026-04-24T23:55:24.877635771Z" level=info msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" returns successfully" Apr 24 23:55:24.878948 containerd[2101]: time="2026-04-24T23:55:24.878320942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5597f658fb-6hcjb,Uid:6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6,Namespace:calico-system,Attempt:1,}" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.826 [INFO][5425] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.827 [INFO][5425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" iface="eth0" netns="/var/run/netns/cni-2e413775-f32b-152f-5ae5-4c65415a1b87" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.827 [INFO][5425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" iface="eth0" netns="/var/run/netns/cni-2e413775-f32b-152f-5ae5-4c65415a1b87" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.828 [INFO][5425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" iface="eth0" netns="/var/run/netns/cni-2e413775-f32b-152f-5ae5-4c65415a1b87" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.828 [INFO][5425] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.828 [INFO][5425] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.875 [INFO][5442] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.875 [INFO][5442] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.875 [INFO][5442] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.886 [WARNING][5442] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.886 [INFO][5442] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.890 [INFO][5442] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:24.898568 containerd[2101]: 2026-04-24 23:55:24.893 [INFO][5425] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:24.886399 systemd[1]: run-netns-cni\x2d77d4ffed\x2dc212\x2d81fe\x2d1c01\x2d648662ed2f7a.mount: Deactivated successfully. Apr 24 23:55:24.906667 containerd[2101]: time="2026-04-24T23:55:24.904873313Z" level=info msg="TearDown network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" successfully" Apr 24 23:55:24.906667 containerd[2101]: time="2026-04-24T23:55:24.904954140Z" level=info msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" returns successfully" Apr 24 23:55:24.914600 containerd[2101]: time="2026-04-24T23:55:24.911298050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-mclkn,Uid:9c70e107-88ef-4f1b-bea2-7693185d0306,Namespace:calico-system,Attempt:1,}" Apr 24 23:55:24.924130 systemd[1]: run-netns-cni\x2d2e413775\x2df32b\x2d152f\x2d5ae5\x2d4c65415a1b87.mount: Deactivated successfully. 
Apr 24 23:55:25.083875 systemd-networkd[1655]: cali6fc51567a67: Link UP Apr 24 23:55:25.090158 systemd-networkd[1655]: cali6fc51567a67: Gained carrier Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:24.954 [INFO][5453] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0 calico-kube-controllers-5597f658fb- calico-system 6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6 1002 0 2026-04-24 23:54:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5597f658fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-136 calico-kube-controllers-5597f658fb-6hcjb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6fc51567a67 [] [] }} ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:24.954 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.014 [INFO][5474] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" HandleID="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" 
Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.028 [INFO][5474] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" HandleID="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd550), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"calico-kube-controllers-5597f658fb-6hcjb", "timestamp":"2026-04-24 23:55:25.01478022 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00026b600)} Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.028 [INFO][5474] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.028 [INFO][5474] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.028 [INFO][5474] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.031 [INFO][5474] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.038 [INFO][5474] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.046 [INFO][5474] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.048 [INFO][5474] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.052 [INFO][5474] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.052 [INFO][5474] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.056 [INFO][5474] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55 Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.062 [INFO][5474] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.072 [INFO][5474] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 
handle="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.072 [INFO][5474] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" host="ip-172-31-23-136" Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.072 [INFO][5474] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:25.120226 containerd[2101]: 2026-04-24 23:55:25.072 [INFO][5474] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" HandleID="k8s-pod-network.6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.078 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0", GenerateName:"calico-kube-controllers-5597f658fb-", Namespace:"calico-system", SelfLink:"", UID:"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5597f658fb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"calico-kube-controllers-5597f658fb-6hcjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fc51567a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.078 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.67/32] ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.078 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc51567a67 ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.089 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" 
WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.090 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0", GenerateName:"calico-kube-controllers-5597f658fb-", Namespace:"calico-system", SelfLink:"", UID:"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5597f658fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55", Pod:"calico-kube-controllers-5597f658fb-6hcjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fc51567a67", 
MAC:"66:1a:e8:db:e2:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:25.122760 containerd[2101]: 2026-04-24 23:55:25.114 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55" Namespace="calico-system" Pod="calico-kube-controllers-5597f658fb-6hcjb" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:25.143650 systemd-networkd[1655]: calidc54e0ee47d: Gained IPv6LL Apr 24 23:55:25.200935 containerd[2101]: time="2026-04-24T23:55:25.188414406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:25.200935 containerd[2101]: time="2026-04-24T23:55:25.188489119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:25.200935 containerd[2101]: time="2026-04-24T23:55:25.188507178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:25.200935 containerd[2101]: time="2026-04-24T23:55:25.188615976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:25.243661 systemd-networkd[1655]: cali87e5f7d556f: Link UP Apr 24 23:55:25.247856 systemd-networkd[1655]: cali87e5f7d556f: Gained carrier Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.031 [INFO][5466] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0 calico-apiserver-65f886c557- calico-system 9c70e107-88ef-4f1b-bea2-7693185d0306 1003 0 2026-04-24 23:54:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f886c557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-136 calico-apiserver-65f886c557-mclkn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali87e5f7d556f [] [] }} ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.031 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.085 [INFO][5485] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" HandleID="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 
23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.108 [INFO][5485] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" HandleID="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000303f10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"calico-apiserver-65f886c557-mclkn", "timestamp":"2026-04-24 23:55:25.085682875 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000359080)} Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.108 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.108 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.108 [INFO][5485] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.132 [INFO][5485] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.142 [INFO][5485] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.163 [INFO][5485] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.167 [INFO][5485] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.171 [INFO][5485] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.171 [INFO][5485] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.179 [INFO][5485] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0 Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.187 [INFO][5485] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.221 [INFO][5485] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 
handle="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.221 [INFO][5485] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" host="ip-172-31-23-136" Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.221 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:25.283140 containerd[2101]: 2026-04-24 23:55:25.221 [INFO][5485] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" HandleID="k8s-pod-network.15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.236 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"9c70e107-88ef-4f1b-bea2-7693185d0306", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"calico-apiserver-65f886c557-mclkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87e5f7d556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.236 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.68/32] ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.237 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87e5f7d556f ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.249 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.251 [INFO][5466] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"9c70e107-88ef-4f1b-bea2-7693185d0306", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0", Pod:"calico-apiserver-65f886c557-mclkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87e5f7d556f", MAC:"42:7e:eb:74:dd:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:25.284557 containerd[2101]: 2026-04-24 23:55:25.272 [INFO][5466] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0" Namespace="calico-system" Pod="calico-apiserver-65f886c557-mclkn" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:25.358353 containerd[2101]: time="2026-04-24T23:55:25.356633608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:25.358353 containerd[2101]: time="2026-04-24T23:55:25.356704854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:25.358353 containerd[2101]: time="2026-04-24T23:55:25.356729117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:25.358353 containerd[2101]: time="2026-04-24T23:55:25.356844239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:25.418001 containerd[2101]: time="2026-04-24T23:55:25.417951390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5597f658fb-6hcjb,Uid:6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55\"" Apr 24 23:55:25.510861 containerd[2101]: time="2026-04-24T23:55:25.510456351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-mclkn,Uid:9c70e107-88ef-4f1b-bea2-7693185d0306,Namespace:calico-system,Attempt:1,} returns sandbox id \"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0\"" Apr 24 23:55:25.728570 containerd[2101]: time="2026-04-24T23:55:25.728177837Z" level=info msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.836 [INFO][5618] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.837 [INFO][5618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" iface="eth0" netns="/var/run/netns/cni-4832e9fe-cd21-4cc0-1d63-0d2ec41715c3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.838 [INFO][5618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" iface="eth0" netns="/var/run/netns/cni-4832e9fe-cd21-4cc0-1d63-0d2ec41715c3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.838 [INFO][5618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" iface="eth0" netns="/var/run/netns/cni-4832e9fe-cd21-4cc0-1d63-0d2ec41715c3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.838 [INFO][5618] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.839 [INFO][5618] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.889 [INFO][5625] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.889 [INFO][5625] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.889 [INFO][5625] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.916 [WARNING][5625] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.916 [INFO][5625] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.920 [INFO][5625] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:25.931105 containerd[2101]: 2026-04-24 23:55:25.925 [INFO][5618] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:25.935411 containerd[2101]: time="2026-04-24T23:55:25.932451504Z" level=info msg="TearDown network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" successfully" Apr 24 23:55:25.935411 containerd[2101]: time="2026-04-24T23:55:25.932487449Z" level=info msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" returns successfully" Apr 24 23:55:25.935411 containerd[2101]: time="2026-04-24T23:55:25.933511220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-5hqq5,Uid:481e03e3-267c-445f-b620-060c178d7beb,Namespace:calico-system,Attempt:1,}" Apr 24 23:55:25.940130 systemd[1]: run-netns-cni\x2d4832e9fe\x2dcd21\x2d4cc0\x2d1d63\x2d0d2ec41715c3.mount: Deactivated successfully. 
Apr 24 23:55:26.288570 systemd-networkd[1655]: calia02e669c41a: Link UP Apr 24 23:55:26.290574 systemd-networkd[1655]: calia02e669c41a: Gained carrier Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.080 [INFO][5632] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0 calico-apiserver-65f886c557- calico-system 481e03e3-267c-445f-b620-060c178d7beb 1016 0 2026-04-24 23:54:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f886c557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-136 calico-apiserver-65f886c557-5hqq5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia02e669c41a [] [] }} ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.080 [INFO][5632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.190 [INFO][5648] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" HandleID="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.209 
[INFO][5648] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" HandleID="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"calico-apiserver-65f886c557-5hqq5", "timestamp":"2026-04-24 23:55:26.190483833 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001862c0)} Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.210 [INFO][5648] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.212 [INFO][5648] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.212 [INFO][5648] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.218 [INFO][5648] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.225 [INFO][5648] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.230 [INFO][5648] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.235 [INFO][5648] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.240 [INFO][5648] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.240 [INFO][5648] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.242 [INFO][5648] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2 Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.249 [INFO][5648] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.272 [INFO][5648] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 
handle="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.272 [INFO][5648] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" host="ip-172-31-23-136" Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.272 [INFO][5648] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:26.323692 containerd[2101]: 2026-04-24 23:55:26.272 [INFO][5648] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" HandleID="k8s-pod-network.974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.277 [INFO][5632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"481e03e3-267c-445f-b620-060c178d7beb", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"calico-apiserver-65f886c557-5hqq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia02e669c41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.277 [INFO][5632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.69/32] ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.277 [INFO][5632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia02e669c41a ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.291 [INFO][5632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.292 [INFO][5632] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"481e03e3-267c-445f-b620-060c178d7beb", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2", Pod:"calico-apiserver-65f886c557-5hqq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia02e669c41a", MAC:"8a:94:a4:5e:29:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:26.325156 containerd[2101]: 2026-04-24 23:55:26.315 [INFO][5632] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2" Namespace="calico-system" Pod="calico-apiserver-65f886c557-5hqq5" WorkloadEndpoint="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:26.416152 containerd[2101]: time="2026-04-24T23:55:26.416051365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:26.416743 containerd[2101]: time="2026-04-24T23:55:26.416702527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:26.416918 containerd[2101]: time="2026-04-24T23:55:26.416886941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:26.417538 containerd[2101]: time="2026-04-24T23:55:26.417413459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:26.521313 containerd[2101]: time="2026-04-24T23:55:26.521247493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f886c557-5hqq5,Uid:481e03e3-267c-445f-b620-060c178d7beb,Namespace:calico-system,Attempt:1,} returns sandbox id \"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2\"" Apr 24 23:55:26.747008 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:55:26.742555 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:55:26.742591 systemd-resolved[1988]: Flushed all caches. 
Apr 24 23:55:26.787193 containerd[2101]: time="2026-04-24T23:55:26.787112419Z" level=info msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" Apr 24 23:55:26.790150 containerd[2101]: time="2026-04-24T23:55:26.790100549Z" level=info msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\"" Apr 24 23:55:26.842173 containerd[2101]: time="2026-04-24T23:55:26.842072847Z" level=info msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" Apr 24 23:55:26.872301 containerd[2101]: time="2026-04-24T23:55:26.871970980Z" level=info msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" Apr 24 23:55:26.936108 systemd-networkd[1655]: cali6fc51567a67: Gained IPv6LL Apr 24 23:55:27.190713 systemd-networkd[1655]: cali87e5f7d556f: Gained IPv6LL Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.007 [INFO][5738] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.009 [INFO][5738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" iface="eth0" netns="/var/run/netns/cni-813e0082-1850-ef98-49b0-3cdf9f995f88" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.009 [INFO][5738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" iface="eth0" netns="/var/run/netns/cni-813e0082-1850-ef98-49b0-3cdf9f995f88" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.009 [INFO][5738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" iface="eth0" netns="/var/run/netns/cni-813e0082-1850-ef98-49b0-3cdf9f995f88" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.009 [INFO][5738] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.009 [INFO][5738] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.185 [INFO][5781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.189 [INFO][5781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.189 [INFO][5781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.221 [WARNING][5781] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.221 [INFO][5781] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.225 [INFO][5781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:27.267296 containerd[2101]: 2026-04-24 23:55:27.243 [INFO][5738] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:55:27.272170 containerd[2101]: time="2026-04-24T23:55:27.271426111Z" level=info msg="TearDown network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" successfully" Apr 24 23:55:27.272170 containerd[2101]: time="2026-04-24T23:55:27.271464822Z" level=info msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" returns successfully" Apr 24 23:55:27.274747 containerd[2101]: time="2026-04-24T23:55:27.274592948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clgnl,Uid:54f65b93-ac7d-4a34-935e-59195780993c,Namespace:calico-system,Attempt:1,}" Apr 24 23:55:27.281331 systemd[1]: run-netns-cni\x2d813e0082\x2d1850\x2def98\x2d49b0\x2d3cdf9f995f88.mount: Deactivated successfully. 
Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.062 [INFO][5737] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.062 [INFO][5737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" iface="eth0" netns="/var/run/netns/cni-4bd67a7c-2816-ea44-00ba-379633fee669" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.066 [INFO][5737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" iface="eth0" netns="/var/run/netns/cni-4bd67a7c-2816-ea44-00ba-379633fee669" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.067 [INFO][5737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" iface="eth0" netns="/var/run/netns/cni-4bd67a7c-2816-ea44-00ba-379633fee669" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.067 [INFO][5737] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.067 [INFO][5737] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.247 [INFO][5788] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.247 [INFO][5788] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.247 [INFO][5788] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.261 [WARNING][5788] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.261 [INFO][5788] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.263 [INFO][5788] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:27.298532 containerd[2101]: 2026-04-24 23:55:27.293 [INFO][5737] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Apr 24 23:55:27.301859 containerd[2101]: time="2026-04-24T23:55:27.300966726Z" level=info msg="TearDown network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" successfully" Apr 24 23:55:27.301859 containerd[2101]: time="2026-04-24T23:55:27.300999762Z" level=info msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" returns successfully" Apr 24 23:55:27.310290 containerd[2101]: time="2026-04-24T23:55:27.309965311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dpzw,Uid:ed3d5ba2-1e47-4166-82c1-9c12137f6661,Namespace:kube-system,Attempt:1,}" Apr 24 23:55:27.311085 systemd[1]: run-netns-cni\x2d4bd67a7c\x2d2816\x2dea44\x2d00ba\x2d379633fee669.mount: Deactivated successfully. Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.137 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0", GenerateName:"calico-kube-controllers-5597f658fb-", Namespace:"calico-system", SelfLink:"", UID:"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5597f658fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55", Pod:"calico-kube-controllers-5597f658fb-6hcjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fc51567a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.142 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.143 [INFO][5769] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" iface="eth0" netns="" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.143 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.143 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.320 [INFO][5802] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.320 [INFO][5802] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.320 [INFO][5802] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.361 [WARNING][5802] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.368 [INFO][5802] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.373 [INFO][5802] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:27.405681 containerd[2101]: 2026-04-24 23:55:27.387 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:27.405681 containerd[2101]: time="2026-04-24T23:55:27.404928072Z" level=info msg="TearDown network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" successfully" Apr 24 23:55:27.405681 containerd[2101]: time="2026-04-24T23:55:27.405019762Z" level=info msg="StopPodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" returns successfully" Apr 24 23:55:27.450293 containerd[2101]: time="2026-04-24T23:55:27.450098244Z" level=info msg="RemovePodSandbox for \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.104 [INFO][5760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.108 [INFO][5760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" iface="eth0" netns="/var/run/netns/cni-43e3bd7a-0b86-ed1f-cc6f-f34e948babce" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.110 [INFO][5760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" iface="eth0" netns="/var/run/netns/cni-43e3bd7a-0b86-ed1f-cc6f-f34e948babce" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.110 [INFO][5760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" iface="eth0" netns="/var/run/netns/cni-43e3bd7a-0b86-ed1f-cc6f-f34e948babce" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.110 [INFO][5760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.110 [INFO][5760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.412 [INFO][5795] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.413 [INFO][5795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.413 [INFO][5795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.434 [WARNING][5795] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.434 [INFO][5795] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.437 [INFO][5795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:27.456887 containerd[2101]: 2026-04-24 23:55:27.445 [INFO][5760] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:55:27.459654 containerd[2101]: time="2026-04-24T23:55:27.459330123Z" level=info msg="TearDown network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" successfully" Apr 24 23:55:27.459654 containerd[2101]: time="2026-04-24T23:55:27.459368897Z" level=info msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" returns successfully" Apr 24 23:55:27.461637 containerd[2101]: time="2026-04-24T23:55:27.461592919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4c8vb,Uid:527ec70e-1bb6-4d30-8070-55e2af7c2275,Namespace:kube-system,Attempt:1,}" Apr 24 23:55:27.466255 containerd[2101]: time="2026-04-24T23:55:27.466207342Z" level=info msg="Forcibly stopping sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\"" Apr 24 23:55:27.831150 systemd-networkd[1655]: calia02e669c41a: Gained IPv6LL Apr 24 23:55:27.848864 systemd-networkd[1655]: calid6f0df2db66: Link UP Apr 24 23:55:27.852731 
systemd-networkd[1655]: calid6f0df2db66: Gained carrier Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.574 [INFO][5834] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0 coredns-674b8bbfcf- kube-system ed3d5ba2-1e47-4166-82c1-9c12137f6661 1026 0 2026-04-24 23:54:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-136 coredns-674b8bbfcf-6dpzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6f0df2db66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.575 [INFO][5834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.723 [INFO][5877] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" HandleID="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.741 [INFO][5877] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" 
HandleID="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-136", "pod":"coredns-674b8bbfcf-6dpzw", "timestamp":"2026-04-24 23:55:27.723703451 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00033a000)} Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.741 [INFO][5877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.742 [INFO][5877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.742 [INFO][5877] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.749 [INFO][5877] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.761 [INFO][5877] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.777 [INFO][5877] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.781 [INFO][5877] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.786 [INFO][5877] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.787 [INFO][5877] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.790 [INFO][5877] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.801 [INFO][5877] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.819 [INFO][5877] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.819 [INFO][5877] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" host="ip-172-31-23-136" Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.819 [INFO][5877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:55:27.905865 containerd[2101]: 2026-04-24 23:55:27.819 [INFO][5877] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" HandleID="k8s-pod-network.29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.828 [INFO][5834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed3d5ba2-1e47-4166-82c1-9c12137f6661", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"coredns-674b8bbfcf-6dpzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6f0df2db66", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.829 [INFO][5834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.70/32] ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.830 [INFO][5834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6f0df2db66 ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.860 [INFO][5834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.865 [INFO][5834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed3d5ba2-1e47-4166-82c1-9c12137f6661", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be", Pod:"coredns-674b8bbfcf-6dpzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6f0df2db66", MAC:"06:bc:08:85:b5:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:27.906891 containerd[2101]: 2026-04-24 23:55:27.889 [INFO][5834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dpzw" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0" Apr 24 23:55:27.922770 systemd[1]: run-netns-cni\x2d43e3bd7a\x2d0b86\x2ded1f\x2dcc6f\x2df34e948babce.mount: Deactivated successfully. Apr 24 23:55:27.986445 systemd-networkd[1655]: cali3b5907b1d2a: Link UP Apr 24 23:55:27.991131 systemd-networkd[1655]: cali3b5907b1d2a: Gained carrier Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.652 [WARNING][5857] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0", GenerateName:"calico-kube-controllers-5597f658fb-", Namespace:"calico-system", SelfLink:"", UID:"6156ce53-52c1-4a6f-b1e7-3bdd3b1076e6", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5597f658fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55", Pod:"calico-kube-controllers-5597f658fb-6hcjb", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fc51567a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.652 [INFO][5857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.652 [INFO][5857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" iface="eth0" netns="" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.652 [INFO][5857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.652 [INFO][5857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.765 [INFO][5889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.775 [INFO][5889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.939 [INFO][5889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.975 [WARNING][5889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.975 [INFO][5889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" HandleID="k8s-pod-network.189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Workload="ip--172--31--23--136-k8s-calico--kube--controllers--5597f658fb--6hcjb-eth0" Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:27.978 [INFO][5889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:28.031241 containerd[2101]: 2026-04-24 23:55:28.009 [INFO][5857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c" Apr 24 23:55:28.031948 containerd[2101]: time="2026-04-24T23:55:28.031399994Z" level=info msg="TearDown network for sandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" successfully" Apr 24 23:55:28.033984 containerd[2101]: time="2026-04-24T23:55:28.032702319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:28.033984 containerd[2101]: time="2026-04-24T23:55:28.032789919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:28.033984 containerd[2101]: time="2026-04-24T23:55:28.032815005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.042533 containerd[2101]: time="2026-04-24T23:55:28.040959517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.586 [INFO][5824] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0 csi-node-driver- calico-system 54f65b93-ac7d-4a34-935e-59195780993c 1025 0 2026-04-24 23:54:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-136 csi-node-driver-clgnl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b5907b1d2a [] [] }} ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.586 [INFO][5824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.731 [INFO][5882] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" HandleID="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 
23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.744 [INFO][5882] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" HandleID="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-136", "pod":"csi-node-driver-clgnl", "timestamp":"2026-04-24 23:55:27.731683021 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001866e0)} Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.744 [INFO][5882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.821 [INFO][5882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.821 [INFO][5882] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.856 [INFO][5882] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.877 [INFO][5882] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.908 [INFO][5882] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.913 [INFO][5882] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.917 [INFO][5882] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.917 [INFO][5882] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.919 [INFO][5882] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759 Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.926 [INFO][5882] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.938 [INFO][5882] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.71/26] block=192.168.124.64/26 
handle="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.939 [INFO][5882] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.71/26] handle="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" host="ip-172-31-23-136" Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.939 [INFO][5882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:28.045315 containerd[2101]: 2026-04-24 23:55:27.939 [INFO][5882] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.71/26] IPv6=[] ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" HandleID="k8s-pod-network.1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:27.978 [INFO][5824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54f65b93-ac7d-4a34-935e-59195780993c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"csi-node-driver-clgnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5907b1d2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:27.978 [INFO][5824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.71/32] ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:27.978 [INFO][5824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5907b1d2a ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:27.990 [INFO][5824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:27.996 [INFO][5824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54f65b93-ac7d-4a34-935e-59195780993c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759", Pod:"csi-node-driver-clgnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5907b1d2a", MAC:"12:61:90:61:26:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.046343 containerd[2101]: 2026-04-24 23:55:28.020 [INFO][5824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759" Namespace="calico-system" Pod="csi-node-driver-clgnl" WorkloadEndpoint="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:55:28.111834 containerd[2101]: time="2026-04-24T23:55:28.106948361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 24 23:55:28.111834 containerd[2101]: time="2026-04-24T23:55:28.107040948Z" level=info msg="RemovePodSandbox \"189264caf0e3cfbddbc22aa3584c4b29afb3bfc4e712d7c352be4734ca6a979c\" returns successfully" Apr 24 23:55:28.122330 containerd[2101]: time="2026-04-24T23:55:28.122069108Z" level=info msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" Apr 24 23:55:28.127037 systemd-networkd[1655]: calie4eaf59c038: Link UP Apr 24 23:55:28.131698 systemd-networkd[1655]: calie4eaf59c038: Gained carrier Apr 24 23:55:28.172002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952266491.mount: Deactivated successfully. 
Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.685 [INFO][5862] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0 coredns-674b8bbfcf- kube-system 527ec70e-1bb6-4d30-8070-55e2af7c2275 1027 0 2026-04-24 23:54:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-136 coredns-674b8bbfcf-4c8vb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4eaf59c038 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.686 [INFO][5862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.800 [INFO][5896] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" HandleID="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.822 [INFO][5896] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" HandleID="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" 
Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036ac90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-136", "pod":"coredns-674b8bbfcf-4c8vb", "timestamp":"2026-04-24 23:55:27.800332897 +0000 UTC"}, Hostname:"ip-172-31-23-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001866e0)} Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.823 [INFO][5896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.982 [INFO][5896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.982 [INFO][5896] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-136' Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.986 [INFO][5896] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:27.994 [INFO][5896] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.018 [INFO][5896] ipam/ipam.go 526: Trying affinity for 192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.023 [INFO][5896] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.030 [INFO][5896] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 
23:55:28.030 [INFO][5896] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.036 [INFO][5896] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003 Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.047 [INFO][5896] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.066 [INFO][5896] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.72/26] block=192.168.124.64/26 handle="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.066 [INFO][5896] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.72/26] handle="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" host="ip-172-31-23-136" Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.066 [INFO][5896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:55:28.273888 containerd[2101]: 2026-04-24 23:55:28.066 [INFO][5896] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.72/26] IPv6=[] ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" HandleID="k8s-pod-network.fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.096 [INFO][5862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"527ec70e-1bb6-4d30-8070-55e2af7c2275", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"", Pod:"coredns-674b8bbfcf-4c8vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4eaf59c038", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.097 [INFO][5862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.72/32] ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.097 [INFO][5862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4eaf59c038 ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.138 [INFO][5862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.146 [INFO][5862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"527ec70e-1bb6-4d30-8070-55e2af7c2275", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003", Pod:"coredns-674b8bbfcf-4c8vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4eaf59c038", MAC:"c2:4f:23:1a:6a:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.275439 containerd[2101]: 2026-04-24 23:55:28.202 [INFO][5862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003" Namespace="kube-system" Pod="coredns-674b8bbfcf-4c8vb" WorkloadEndpoint="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:55:28.319264 containerd[2101]: time="2026-04-24T23:55:28.316863688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:28.319264 containerd[2101]: time="2026-04-24T23:55:28.316936713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:28.319264 containerd[2101]: time="2026-04-24T23:55:28.316973561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.319264 containerd[2101]: time="2026-04-24T23:55:28.317102126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.409969 containerd[2101]: time="2026-04-24T23:55:28.409919925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dpzw,Uid:ed3d5ba2-1e47-4166-82c1-9c12137f6661,Namespace:kube-system,Attempt:1,} returns sandbox id \"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be\"" Apr 24 23:55:28.451326 containerd[2101]: time="2026-04-24T23:55:28.442711627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:55:28.451326 containerd[2101]: time="2026-04-24T23:55:28.443838031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:55:28.451326 containerd[2101]: time="2026-04-24T23:55:28.443875518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.451326 containerd[2101]: time="2026-04-24T23:55:28.444018329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:55:28.465148 containerd[2101]: time="2026-04-24T23:55:28.465110407Z" level=info msg="CreateContainer within sandbox \"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:55:28.543773 containerd[2101]: time="2026-04-24T23:55:28.543721388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clgnl,Uid:54f65b93-ac7d-4a34-935e-59195780993c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759\"" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.338 [WARNING][5993] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"481e03e3-267c-445f-b620-060c178d7beb", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2", Pod:"calico-apiserver-65f886c557-5hqq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia02e669c41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.339 [INFO][5993] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.339 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" iface="eth0" netns="" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.339 [INFO][5993] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.339 [INFO][5993] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.562 [INFO][6032] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.567 [INFO][6032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.567 [INFO][6032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.581 [WARNING][6032] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.581 [INFO][6032] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.584 [INFO][6032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:28.599976 containerd[2101]: 2026-04-24 23:55:28.589 [INFO][5993] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.604373 containerd[2101]: time="2026-04-24T23:55:28.603047595Z" level=info msg="TearDown network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" successfully" Apr 24 23:55:28.604373 containerd[2101]: time="2026-04-24T23:55:28.603080481Z" level=info msg="StopPodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" returns successfully" Apr 24 23:55:28.604373 containerd[2101]: time="2026-04-24T23:55:28.603755009Z" level=info msg="RemovePodSandbox for \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" Apr 24 23:55:28.604373 containerd[2101]: time="2026-04-24T23:55:28.603785402Z" level=info msg="Forcibly stopping sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\"" Apr 24 23:55:28.615365 containerd[2101]: time="2026-04-24T23:55:28.615324730Z" level=info msg="CreateContainer within sandbox \"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"916b11886c68abf85d7378fcf4bfaf827c9606b25e6f798d205aa9a79d69a8d6\"" Apr 24 23:55:28.622302 containerd[2101]: time="2026-04-24T23:55:28.621812689Z" level=info msg="StartContainer for \"916b11886c68abf85d7378fcf4bfaf827c9606b25e6f798d205aa9a79d69a8d6\"" Apr 24 23:55:28.636748 containerd[2101]: time="2026-04-24T23:55:28.636677191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4c8vb,Uid:527ec70e-1bb6-4d30-8070-55e2af7c2275,Namespace:kube-system,Attempt:1,} returns sandbox id \"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003\"" Apr 24 23:55:28.656513 containerd[2101]: time="2026-04-24T23:55:28.656451569Z" level=info msg="CreateContainer within sandbox \"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:55:28.689309 containerd[2101]: time="2026-04-24T23:55:28.689151444Z" level=info msg="CreateContainer within sandbox \"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd14ac35d0a0a8c6af075498edb1448a27089c604f3bc96af8975b5795c38dbf\"" Apr 24 23:55:28.694129 containerd[2101]: time="2026-04-24T23:55:28.694086727Z" level=info msg="StartContainer for \"cd14ac35d0a0a8c6af075498edb1448a27089c604f3bc96af8975b5795c38dbf\"" Apr 24 23:55:28.734954 containerd[2101]: time="2026-04-24T23:55:28.734820696Z" level=info msg="StartContainer for \"916b11886c68abf85d7378fcf4bfaf827c9606b25e6f798d205aa9a79d69a8d6\" returns successfully" Apr 24 23:55:28.794947 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:55:28.791602 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:55:28.791630 systemd-resolved[1988]: Flushed all caches. 
Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.803 [WARNING][6126] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"481e03e3-267c-445f-b620-060c178d7beb", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2", Pod:"calico-apiserver-65f886c557-5hqq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia02e669c41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.803 [INFO][6126] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.803 [INFO][6126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" iface="eth0" netns="" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.803 [INFO][6126] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.803 [INFO][6126] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.869 [INFO][6166] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.869 [INFO][6166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.869 [INFO][6166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.880 [WARNING][6166] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.880 [INFO][6166] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" HandleID="k8s-pod-network.49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--5hqq5-eth0" Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.883 [INFO][6166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:28.893734 containerd[2101]: 2026-04-24 23:55:28.887 [INFO][6126] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3" Apr 24 23:55:28.895005 containerd[2101]: time="2026-04-24T23:55:28.893802800Z" level=info msg="TearDown network for sandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" successfully" Apr 24 23:55:28.901950 containerd[2101]: time="2026-04-24T23:55:28.901819939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:55:28.902622 containerd[2101]: time="2026-04-24T23:55:28.902133403Z" level=info msg="RemovePodSandbox \"49cfef66a32845ef426d9d18c3ea443150f2a4eee02bace843c2e73feb85b9e3\" returns successfully" Apr 24 23:55:28.904367 containerd[2101]: time="2026-04-24T23:55:28.903622745Z" level=info msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" Apr 24 23:55:28.916318 containerd[2101]: time="2026-04-24T23:55:28.916072746Z" level=info msg="StartContainer for \"cd14ac35d0a0a8c6af075498edb1448a27089c604f3bc96af8975b5795c38dbf\" returns successfully" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.013 [WARNING][6196] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"9c70e107-88ef-4f1b-bea2-7693185d0306", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0", 
Pod:"calico-apiserver-65f886c557-mclkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87e5f7d556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.014 [INFO][6196] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.014 [INFO][6196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" iface="eth0" netns="" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.014 [INFO][6196] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.014 [INFO][6196] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.066 [INFO][6204] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.067 [INFO][6204] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.067 [INFO][6204] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.076 [WARNING][6204] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.077 [INFO][6204] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.079 [INFO][6204] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:29.087171 containerd[2101]: 2026-04-24 23:55:29.083 [INFO][6196] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.087171 containerd[2101]: time="2026-04-24T23:55:29.086934063Z" level=info msg="TearDown network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" successfully" Apr 24 23:55:29.087171 containerd[2101]: time="2026-04-24T23:55:29.086991733Z" level=info msg="StopPodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" returns successfully" Apr 24 23:55:29.088467 containerd[2101]: time="2026-04-24T23:55:29.087778850Z" level=info msg="RemovePodSandbox for \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" Apr 24 23:55:29.088467 containerd[2101]: time="2026-04-24T23:55:29.087810482Z" level=info msg="Forcibly stopping sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\"" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.159 [WARNING][6224] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0", GenerateName:"calico-apiserver-65f886c557-", Namespace:"calico-system", SelfLink:"", UID:"9c70e107-88ef-4f1b-bea2-7693185d0306", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f886c557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0", Pod:"calico-apiserver-65f886c557-mclkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87e5f7d556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.160 [INFO][6224] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.160 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" iface="eth0" netns="" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.161 [INFO][6224] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.161 [INFO][6224] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.217 [INFO][6231] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.217 [INFO][6231] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.217 [INFO][6231] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.232 [WARNING][6231] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.232 [INFO][6231] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" HandleID="k8s-pod-network.3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Workload="ip--172--31--23--136-k8s-calico--apiserver--65f886c557--mclkn-eth0" Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.238 [INFO][6231] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:29.253948 containerd[2101]: 2026-04-24 23:55:29.245 [INFO][6224] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f" Apr 24 23:55:29.255257 containerd[2101]: time="2026-04-24T23:55:29.254683122Z" level=info msg="TearDown network for sandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" successfully" Apr 24 23:55:29.266209 containerd[2101]: time="2026-04-24T23:55:29.266150911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:55:29.266952 containerd[2101]: time="2026-04-24T23:55:29.266498991Z" level=info msg="RemovePodSandbox \"3ae87e0add256a7b54f8cce248204245c132ed8ffad41d894987d9ef367cd76f\" returns successfully" Apr 24 23:55:29.267508 containerd[2101]: time="2026-04-24T23:55:29.267258125Z" level=info msg="StopPodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" Apr 24 23:55:29.430610 systemd-networkd[1655]: calie4eaf59c038: Gained IPv6LL Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.369 [WARNING][6246] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.369 [INFO][6246] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.369 [INFO][6246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" iface="eth0" netns="" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.369 [INFO][6246] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.369 [INFO][6246] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.425 [INFO][6254] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.427 [INFO][6254] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.427 [INFO][6254] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.441 [WARNING][6254] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.441 [INFO][6254] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.443 [INFO][6254] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:29.455750 containerd[2101]: 2026-04-24 23:55:29.446 [INFO][6246] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.455750 containerd[2101]: time="2026-04-24T23:55:29.455348201Z" level=info msg="TearDown network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" successfully" Apr 24 23:55:29.455750 containerd[2101]: time="2026-04-24T23:55:29.455386308Z" level=info msg="StopPodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" returns successfully" Apr 24 23:55:29.459621 containerd[2101]: time="2026-04-24T23:55:29.456719160Z" level=info msg="RemovePodSandbox for \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" Apr 24 23:55:29.459621 containerd[2101]: time="2026-04-24T23:55:29.456759701Z" level=info msg="Forcibly stopping sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\"" Apr 24 23:55:29.530591 kubelet[3571]: I0424 23:55:29.530115 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4c8vb" podStartSLOduration=58.530086955 
podStartE2EDuration="58.530086955s" podCreationTimestamp="2026-04-24 23:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:55:29.524025173 +0000 UTC m=+62.992174047" watchObservedRunningTime="2026-04-24 23:55:29.530086955 +0000 UTC m=+62.998235818" Apr 24 23:55:29.564177 kubelet[3571]: I0424 23:55:29.563590 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6dpzw" podStartSLOduration=58.558259768 podStartE2EDuration="58.558259768s" podCreationTimestamp="2026-04-24 23:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:55:29.554521632 +0000 UTC m=+63.022670503" watchObservedRunningTime="2026-04-24 23:55:29.558259768 +0000 UTC m=+63.026408633" Apr 24 23:55:29.622518 systemd-networkd[1655]: cali3b5907b1d2a: Gained IPv6LL Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.592 [WARNING][6272] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" WorkloadEndpoint="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.592 [INFO][6272] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.592 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" iface="eth0" netns="" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.592 [INFO][6272] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.592 [INFO][6272] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.662 [INFO][6281] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.663 [INFO][6281] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.664 [INFO][6281] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.676 [WARNING][6281] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.676 [INFO][6281] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" HandleID="k8s-pod-network.5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Workload="ip--172--31--23--136-k8s-whisker--7858979b86--2k95n-eth0" Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.678 [INFO][6281] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:29.691543 containerd[2101]: 2026-04-24 23:55:29.685 [INFO][6272] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e" Apr 24 23:55:29.691543 containerd[2101]: time="2026-04-24T23:55:29.691439300Z" level=info msg="TearDown network for sandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" successfully" Apr 24 23:55:29.699599 containerd[2101]: time="2026-04-24T23:55:29.699392706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:55:29.699599 containerd[2101]: time="2026-04-24T23:55:29.699487821Z" level=info msg="RemovePodSandbox \"5a20a4ebbba509810175a9e53bdef5af2874b2ea6a9d89b0248985e5714e708e\" returns successfully" Apr 24 23:55:29.701000 containerd[2101]: time="2026-04-24T23:55:29.700966521Z" level=info msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" Apr 24 23:55:29.814706 systemd-networkd[1655]: calid6f0df2db66: Gained IPv6LL Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.800 [WARNING][6300] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"857384f3-d2e7-446d-9732-a43f27f17e84", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985", Pod:"goldmane-5b85766d88-5rfzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc54e0ee47d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.801 [INFO][6300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.802 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" iface="eth0" netns="" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.802 [INFO][6300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.802 [INFO][6300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.919 [INFO][6310] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.920 [INFO][6310] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.920 [INFO][6310] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.933 [WARNING][6310] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.933 [INFO][6310] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0" Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.935 [INFO][6310] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:55:29.940752 containerd[2101]: 2026-04-24 23:55:29.938 [INFO][6300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Apr 24 23:55:29.941968 containerd[2101]: time="2026-04-24T23:55:29.940792928Z" level=info msg="TearDown network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" successfully" Apr 24 23:55:29.941968 containerd[2101]: time="2026-04-24T23:55:29.940822894Z" level=info msg="StopPodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" returns successfully" Apr 24 23:55:29.941968 containerd[2101]: time="2026-04-24T23:55:29.941416348Z" level=info msg="RemovePodSandbox for \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" Apr 24 23:55:29.941968 containerd[2101]: time="2026-04-24T23:55:29.941455003Z" level=info msg="Forcibly stopping sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\"" Apr 24 23:55:29.975139 containerd[2101]: time="2026-04-24T23:55:29.975079789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 
23:55:29.978765 containerd[2101]: time="2026-04-24T23:55:29.978693839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:55:29.981196 containerd[2101]: time="2026-04-24T23:55:29.981129206Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:29.986291 containerd[2101]: time="2026-04-24T23:55:29.986206486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:55:29.989449 containerd[2101]: time="2026-04-24T23:55:29.988654960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.731465531s" Apr 24 23:55:29.989449 containerd[2101]: time="2026-04-24T23:55:29.988705915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:55:29.990509 containerd[2101]: time="2026-04-24T23:55:29.990477303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:55:29.999034 containerd[2101]: time="2026-04-24T23:55:29.998521396Z" level=info msg="CreateContainer within sandbox \"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:55:30.041603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618652173.mount: Deactivated successfully. 
Apr 24 23:55:30.066979 containerd[2101]: time="2026-04-24T23:55:30.066694106Z" level=info msg="CreateContainer within sandbox \"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0948cc51a2db872175f4f133d97bdf062ce7d2a61035e87735402efcaeec5e0d\""
Apr 24 23:55:30.069899 containerd[2101]: time="2026-04-24T23:55:30.067877660Z" level=info msg="StartContainer for \"0948cc51a2db872175f4f133d97bdf062ce7d2a61035e87735402efcaeec5e0d\""
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.040 [WARNING][6336] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"857384f3-d2e7-446d-9732-a43f27f17e84", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"4f0055de97c59d3ae59d11be07d4b565edb8404405baf2af14e4449a04f75985", Pod:"goldmane-5b85766d88-5rfzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc54e0ee47d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.042 [INFO][6336] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.042 [INFO][6336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" iface="eth0" netns=""
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.042 [INFO][6336] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.042 [INFO][6336] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.113 [INFO][6343] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.114 [INFO][6343] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.114 [INFO][6343] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.146 [WARNING][6343] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.149 [INFO][6343] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" HandleID="k8s-pod-network.05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599" Workload="ip--172--31--23--136-k8s-goldmane--5b85766d88--5rfzw-eth0"
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.153 [INFO][6343] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:55:30.163243 containerd[2101]: 2026-04-24 23:55:30.159 [INFO][6336] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599"
Apr 24 23:55:30.164955 containerd[2101]: time="2026-04-24T23:55:30.164475437Z" level=info msg="TearDown network for sandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" successfully"
Apr 24 23:55:30.171018 containerd[2101]: time="2026-04-24T23:55:30.170858157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:55:30.171018 containerd[2101]: time="2026-04-24T23:55:30.170974775Z" level=info msg="RemovePodSandbox \"05f415957a3c71d8ee5d1eba3e238c6c8363d217f7f35949d9899ef9ac0f2599\" returns successfully"
Apr 24 23:55:30.238714 systemd[1]: run-containerd-runc-k8s.io-0948cc51a2db872175f4f133d97bdf062ce7d2a61035e87735402efcaeec5e0d-runc.ffR8tB.mount: Deactivated successfully.
Apr 24 23:55:30.290874 containerd[2101]: time="2026-04-24T23:55:30.290808261Z" level=info msg="StartContainer for \"0948cc51a2db872175f4f133d97bdf062ce7d2a61035e87735402efcaeec5e0d\" returns successfully"
Apr 24 23:55:30.589585 kubelet[3571]: I0424 23:55:30.589391 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-5rfzw" podStartSLOduration=37.856030392 podStartE2EDuration="43.589365553s" podCreationTimestamp="2026-04-24 23:54:47 +0000 UTC" firstStartedPulling="2026-04-24 23:55:24.256600311 +0000 UTC m=+57.724749151" lastFinishedPulling="2026-04-24 23:55:29.989935461 +0000 UTC m=+63.458084312" observedRunningTime="2026-04-24 23:55:30.566911792 +0000 UTC m=+64.035060646" watchObservedRunningTime="2026-04-24 23:55:30.589365553 +0000 UTC m=+64.057514415"
Apr 24 23:55:32.595421 ntpd[2062]: Listen normally on 9 calidc54e0ee47d [fe80::ecee:eeff:feee:eeee%8]:123
Apr 24 23:55:32.595513 ntpd[2062]: Listen normally on 10 cali6fc51567a67 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 24 23:55:32.596800 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 9 calidc54e0ee47d [fe80::ecee:eeff:feee:eeee%8]:123
Apr 24 23:55:32.596800 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 10 cali6fc51567a67 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 24 23:55:32.596800 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 11 cali87e5f7d556f [fe80::ecee:eeff:feee:eeee%10]:123
Apr 24 23:55:32.595555 ntpd[2062]: Listen normally on 11 cali87e5f7d556f [fe80::ecee:eeff:feee:eeee%10]:123
Apr 24 23:55:32.597417 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 12 calia02e669c41a [fe80::ecee:eeff:feee:eeee%11]:123
Apr 24 23:55:32.597417 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 13 calid6f0df2db66 [fe80::ecee:eeff:feee:eeee%12]:123
Apr 24 23:55:32.597417 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 14 cali3b5907b1d2a [fe80::ecee:eeff:feee:eeee%13]:123
Apr 24 23:55:32.597417 ntpd[2062]: 24 Apr 23:55:32 ntpd[2062]: Listen normally on 15 calie4eaf59c038 [fe80::ecee:eeff:feee:eeee%14]:123
Apr 24 23:55:32.595594 ntpd[2062]: Listen normally on 12 calia02e669c41a [fe80::ecee:eeff:feee:eeee%11]:123
Apr 24 23:55:32.597094 ntpd[2062]: Listen normally on 13 calid6f0df2db66 [fe80::ecee:eeff:feee:eeee%12]:123
Apr 24 23:55:32.597163 ntpd[2062]: Listen normally on 14 cali3b5907b1d2a [fe80::ecee:eeff:feee:eeee%13]:123
Apr 24 23:55:32.597204 ntpd[2062]: Listen normally on 15 calie4eaf59c038 [fe80::ecee:eeff:feee:eeee%14]:123
Apr 24 23:55:33.287823 systemd[1]: Started sshd@7-172.31.23.136:22-4.175.71.9:59800.service - OpenSSH per-connection server daemon (4.175.71.9:59800).
Apr 24 23:55:33.998284 containerd[2101]: time="2026-04-24T23:55:33.998223164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:34.006355 containerd[2101]: time="2026-04-24T23:55:34.006253333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 24 23:55:34.009977 containerd[2101]: time="2026-04-24T23:55:34.009909946Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:34.015518 containerd[2101]: time="2026-04-24T23:55:34.015102084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:34.016351 containerd[2101]: time="2026-04-24T23:55:34.016297926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.025778105s"
Apr 24 23:55:34.016351 containerd[2101]: time="2026-04-24T23:55:34.016334323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 24 23:55:34.079499 containerd[2101]: time="2026-04-24T23:55:34.079452008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 24 23:55:34.259136 containerd[2101]: time="2026-04-24T23:55:34.259007390Z" level=info msg="CreateContainer within sandbox \"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 24 23:55:34.275858 containerd[2101]: time="2026-04-24T23:55:34.275623721Z" level=info msg="CreateContainer within sandbox \"6fe111e94bc0f6b0b110b4ef7f89476074f4ce56c1b9e31e252fcd0e9b174f55\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1886524066b039c4d945ce6ef8e258ba82cfe560f7112a6cd4cf3039a192402d\""
Apr 24 23:55:34.277063 containerd[2101]: time="2026-04-24T23:55:34.276642289Z" level=info msg="StartContainer for \"1886524066b039c4d945ce6ef8e258ba82cfe560f7112a6cd4cf3039a192402d\""
Apr 24 23:55:34.394602 sshd[6432]: Accepted publickey for core from 4.175.71.9 port 59800 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:55:34.408818 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:55:34.458161 systemd-logind[2083]: New session 8 of user core.
Apr 24 23:55:34.461971 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 24 23:55:34.555363 containerd[2101]: time="2026-04-24T23:55:34.554726876Z" level=info msg="StartContainer for \"1886524066b039c4d945ce6ef8e258ba82cfe560f7112a6cd4cf3039a192402d\" returns successfully"
Apr 24 23:55:34.701864 kubelet[3571]: I0424 23:55:34.700861 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5597f658fb-6hcjb" podStartSLOduration=38.056236271 podStartE2EDuration="46.700606025s" podCreationTimestamp="2026-04-24 23:54:48 +0000 UTC" firstStartedPulling="2026-04-24 23:55:25.424061905 +0000 UTC m=+58.892210753" lastFinishedPulling="2026-04-24 23:55:34.068431636 +0000 UTC m=+67.536580507" observedRunningTime="2026-04-24 23:55:34.699499856 +0000 UTC m=+68.167648734" watchObservedRunningTime="2026-04-24 23:55:34.700606025 +0000 UTC m=+68.168754887"
Apr 24 23:55:34.745630 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:34.742839 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:34.742878 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:35.987948 sshd[6432]: pam_unix(sshd:session): session closed for user core
Apr 24 23:55:35.996967 systemd[1]: sshd@7-172.31.23.136:22-4.175.71.9:59800.service: Deactivated successfully.
Apr 24 23:55:36.008658 systemd[1]: session-8.scope: Deactivated successfully.
Apr 24 23:55:36.009690 systemd-logind[2083]: Session 8 logged out. Waiting for processes to exit.
Apr 24 23:55:36.012227 systemd-logind[2083]: Removed session 8.
Apr 24 23:55:36.797053 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:36.791356 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:36.791395 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:37.931193 containerd[2101]: time="2026-04-24T23:55:37.931131550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:37.957087 containerd[2101]: time="2026-04-24T23:55:37.933094290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 24 23:55:37.957087 containerd[2101]: time="2026-04-24T23:55:37.948661163Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:37.962233 containerd[2101]: time="2026-04-24T23:55:37.962180211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.882679197s"
Apr 24 23:55:37.962233 containerd[2101]: time="2026-04-24T23:55:37.962231775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 24 23:55:37.963374 containerd[2101]: time="2026-04-24T23:55:37.962810200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:38.003821 containerd[2101]: time="2026-04-24T23:55:38.003585911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 24 23:55:38.047920 containerd[2101]: time="2026-04-24T23:55:38.047866785Z" level=info msg="CreateContainer within sandbox \"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 24 23:55:38.067627 containerd[2101]: time="2026-04-24T23:55:38.067579748Z" level=info msg="CreateContainer within sandbox \"15e31ec20c0ebe0f1afb86fbdcaeb38bd26f3ca9ca5c75c1318c5aa416cb1fe0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca66fd0de44efd87c6d0928c12c7397c16d1175fb816168fb50fcb5fa04f65eb\""
Apr 24 23:55:38.076902 containerd[2101]: time="2026-04-24T23:55:38.075374052Z" level=info msg="StartContainer for \"ca66fd0de44efd87c6d0928c12c7397c16d1175fb816168fb50fcb5fa04f65eb\""
Apr 24 23:55:38.221078 containerd[2101]: time="2026-04-24T23:55:38.220336962Z" level=info msg="StartContainer for \"ca66fd0de44efd87c6d0928c12c7397c16d1175fb816168fb50fcb5fa04f65eb\" returns successfully"
Apr 24 23:55:38.323710 containerd[2101]: time="2026-04-24T23:55:38.323665650Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:38.328553 containerd[2101]: time="2026-04-24T23:55:38.328204599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 24 23:55:38.332849 containerd[2101]: time="2026-04-24T23:55:38.332780437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 329.144912ms"
Apr 24 23:55:38.334514 containerd[2101]: time="2026-04-24T23:55:38.334383329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 24 23:55:38.339329 containerd[2101]: time="2026-04-24T23:55:38.339300540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 24 23:55:38.349000 containerd[2101]: time="2026-04-24T23:55:38.348964121Z" level=info msg="CreateContainer within sandbox \"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 24 23:55:38.373659 containerd[2101]: time="2026-04-24T23:55:38.373618695Z" level=info msg="CreateContainer within sandbox \"974bfb3377e4e8efd695c85eb1e6f0af214a11312c264aae12d085b0b507bdf2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bfc307ac7c597ebce891ee0b4b0b39951e912f80aedc3af66dad2919f28a4f30\""
Apr 24 23:55:38.376404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281982492.mount: Deactivated successfully.
Apr 24 23:55:38.384205 containerd[2101]: time="2026-04-24T23:55:38.377396124Z" level=info msg="StartContainer for \"bfc307ac7c597ebce891ee0b4b0b39951e912f80aedc3af66dad2919f28a4f30\""
Apr 24 23:55:38.569885 containerd[2101]: time="2026-04-24T23:55:38.569158414Z" level=info msg="StartContainer for \"bfc307ac7c597ebce891ee0b4b0b39951e912f80aedc3af66dad2919f28a4f30\" returns successfully"
Apr 24 23:55:38.843415 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:38.838628 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:38.838663 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:39.377111 kubelet[3571]: I0424 23:55:39.355721 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-65f886c557-mclkn" podStartSLOduration=39.81411459 podStartE2EDuration="52.292282951s" podCreationTimestamp="2026-04-24 23:54:47 +0000 UTC" firstStartedPulling="2026-04-24 23:55:25.511908492 +0000 UTC m=+58.980057334" lastFinishedPulling="2026-04-24 23:55:37.990076844 +0000 UTC m=+71.458225695" observedRunningTime="2026-04-24 23:55:39.114751273 +0000 UTC m=+72.582900136" watchObservedRunningTime="2026-04-24 23:55:39.292282951 +0000 UTC m=+72.760431807"
Apr 24 23:55:39.378233 kubelet[3571]: I0424 23:55:39.377885 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-65f886c557-5hqq5" podStartSLOduration=40.566478958 podStartE2EDuration="52.377867299s" podCreationTimestamp="2026-04-24 23:54:47 +0000 UTC" firstStartedPulling="2026-04-24 23:55:26.523928356 +0000 UTC m=+59.992077199" lastFinishedPulling="2026-04-24 23:55:38.335316696 +0000 UTC m=+71.803465540" observedRunningTime="2026-04-24 23:55:39.151118413 +0000 UTC m=+72.619267276" watchObservedRunningTime="2026-04-24 23:55:39.377867299 +0000 UTC m=+72.846016162"
Apr 24 23:55:40.470527 containerd[2101]: time="2026-04-24T23:55:40.469308744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:40.472795 containerd[2101]: time="2026-04-24T23:55:40.472745576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 24 23:55:40.476422 containerd[2101]: time="2026-04-24T23:55:40.476387086Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:40.479616 containerd[2101]: time="2026-04-24T23:55:40.479580138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:40.481293 containerd[2101]: time="2026-04-24T23:55:40.480970779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.141409833s"
Apr 24 23:55:40.481293 containerd[2101]: time="2026-04-24T23:55:40.481012925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 24 23:55:40.565683 containerd[2101]: time="2026-04-24T23:55:40.565633310Z" level=info msg="CreateContainer within sandbox \"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 24 23:55:40.645903 containerd[2101]: time="2026-04-24T23:55:40.645546836Z" level=info msg="CreateContainer within sandbox \"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"729a5e0bf3092212e162529512a4230f7869c8edb3d4dea8bb1f790c66986f54\""
Apr 24 23:55:40.666452 containerd[2101]: time="2026-04-24T23:55:40.665529107Z" level=info msg="StartContainer for \"729a5e0bf3092212e162529512a4230f7869c8edb3d4dea8bb1f790c66986f54\""
Apr 24 23:55:40.822890 systemd[1]: run-containerd-runc-k8s.io-729a5e0bf3092212e162529512a4230f7869c8edb3d4dea8bb1f790c66986f54-runc.0qWFwx.mount: Deactivated successfully.
Apr 24 23:55:40.892171 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:40.886781 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:40.891695 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:40.927256 containerd[2101]: time="2026-04-24T23:55:40.927186240Z" level=info msg="StartContainer for \"729a5e0bf3092212e162529512a4230f7869c8edb3d4dea8bb1f790c66986f54\" returns successfully"
Apr 24 23:55:40.939656 containerd[2101]: time="2026-04-24T23:55:40.939595352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 24 23:55:41.147827 systemd[1]: Started sshd@8-172.31.23.136:22-4.175.71.9:57260.service - OpenSSH per-connection server daemon (4.175.71.9:57260).
Apr 24 23:55:42.274465 sshd[6673]: Accepted publickey for core from 4.175.71.9 port 57260 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:55:42.280466 sshd[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:55:42.286946 systemd-logind[2083]: New session 9 of user core.
Apr 24 23:55:42.291756 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 24 23:55:42.921839 containerd[2101]: time="2026-04-24T23:55:42.921741445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:42.924689 containerd[2101]: time="2026-04-24T23:55:42.924592463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 24 23:55:42.926690 containerd[2101]: time="2026-04-24T23:55:42.926433862Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:42.935796 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:42.939257 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:42.939372 containerd[2101]: time="2026-04-24T23:55:42.936318214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:55:42.935835 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:42.942481 containerd[2101]: time="2026-04-24T23:55:42.941463415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.001817288s"
Apr 24 23:55:42.942481 containerd[2101]: time="2026-04-24T23:55:42.941508808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 24 23:55:42.986525 containerd[2101]: time="2026-04-24T23:55:42.986349591Z" level=info msg="CreateContainer within sandbox \"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 24 23:55:43.022263 containerd[2101]: time="2026-04-24T23:55:43.021303950Z" level=info msg="CreateContainer within sandbox \"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f668b9ce4c40b0e7be9e2f5ea7f0b9b42fa23f005638717291c7bf31a179861a\""
Apr 24 23:55:43.025374 containerd[2101]: time="2026-04-24T23:55:43.024442189Z" level=info msg="StartContainer for \"f668b9ce4c40b0e7be9e2f5ea7f0b9b42fa23f005638717291c7bf31a179861a\""
Apr 24 23:55:43.214979 containerd[2101]: time="2026-04-24T23:55:43.214853344Z" level=info msg="StartContainer for \"f668b9ce4c40b0e7be9e2f5ea7f0b9b42fa23f005638717291c7bf31a179861a\" returns successfully"
Apr 24 23:55:44.195447 sshd[6673]: pam_unix(sshd:session): session closed for user core
Apr 24 23:55:44.216881 systemd[1]: sshd@8-172.31.23.136:22-4.175.71.9:57260.service: Deactivated successfully.
Apr 24 23:55:44.226988 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 23:55:44.230107 systemd-logind[2083]: Session 9 logged out. Waiting for processes to exit.
Apr 24 23:55:44.252480 systemd-logind[2083]: Removed session 9.
Apr 24 23:55:44.545255 kubelet[3571]: I0424 23:55:44.543145 3571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-clgnl" podStartSLOduration=42.112298533 podStartE2EDuration="56.519849797s" podCreationTimestamp="2026-04-24 23:54:48 +0000 UTC" firstStartedPulling="2026-04-24 23:55:28.547643944 +0000 UTC m=+62.015792784" lastFinishedPulling="2026-04-24 23:55:42.955195195 +0000 UTC m=+76.423344048" observedRunningTime="2026-04-24 23:55:44.504687467 +0000 UTC m=+77.972836326" watchObservedRunningTime="2026-04-24 23:55:44.519849797 +0000 UTC m=+77.987998659"
Apr 24 23:55:44.582572 kubelet[3571]: I0424 23:55:44.580790 3571 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 24 23:55:44.587298 kubelet[3571]: I0424 23:55:44.587248 3571 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 24 23:55:44.986317 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:44.986881 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:44.986899 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:49.387340 systemd[1]: Started sshd@9-172.31.23.136:22-4.175.71.9:35124.service - OpenSSH per-connection server daemon (4.175.71.9:35124).
Apr 24 23:55:50.434699 sshd[6785]: Accepted publickey for core from 4.175.71.9 port 35124 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:55:50.438565 sshd[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:55:50.445176 systemd-logind[2083]: New session 10 of user core.
Apr 24 23:55:50.449737 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 23:55:50.742674 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:50.742726 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:50.745313 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:51.781919 sshd[6785]: pam_unix(sshd:session): session closed for user core
Apr 24 23:55:51.785957 systemd-logind[2083]: Session 10 logged out. Waiting for processes to exit.
Apr 24 23:55:51.786508 systemd[1]: sshd@9-172.31.23.136:22-4.175.71.9:35124.service: Deactivated successfully.
Apr 24 23:55:51.798674 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 23:55:51.800388 systemd-logind[2083]: Removed session 10.
Apr 24 23:55:54.780649 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:54.791592 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:54.791603 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:56.825599 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:55:56.823187 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:55:56.823197 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:55:56.951679 systemd[1]: Started sshd@10-172.31.23.136:22-4.175.71.9:42820.service - OpenSSH per-connection server daemon (4.175.71.9:42820).
Apr 24 23:55:58.022944 sshd[6816]: Accepted publickey for core from 4.175.71.9 port 42820 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:55:58.027337 sshd[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:55:58.034829 systemd-logind[2083]: New session 11 of user core.
Apr 24 23:55:58.038743 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 24 23:55:59.104793 sshd[6816]: pam_unix(sshd:session): session closed for user core
Apr 24 23:55:59.113316 systemd-logind[2083]: Session 11 logged out. Waiting for processes to exit.
Apr 24 23:55:59.114184 systemd[1]: sshd@10-172.31.23.136:22-4.175.71.9:42820.service: Deactivated successfully.
Apr 24 23:55:59.124625 systemd[1]: session-11.scope: Deactivated successfully.
Apr 24 23:55:59.128874 systemd-logind[2083]: Removed session 11.
Apr 24 23:55:59.277599 systemd[1]: Started sshd@11-172.31.23.136:22-4.175.71.9:42822.service - OpenSSH per-connection server daemon (4.175.71.9:42822).
Apr 24 23:56:00.310361 sshd[6840]: Accepted publickey for core from 4.175.71.9 port 42822 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:56:00.311433 sshd[6840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:56:00.316202 systemd-logind[2083]: New session 12 of user core.
Apr 24 23:56:00.321840 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 23:56:01.696641 sshd[6840]: pam_unix(sshd:session): session closed for user core
Apr 24 23:56:01.717598 systemd[1]: sshd@11-172.31.23.136:22-4.175.71.9:42822.service: Deactivated successfully.
Apr 24 23:56:01.726640 systemd-logind[2083]: Session 12 logged out. Waiting for processes to exit.
Apr 24 23:56:01.728264 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 23:56:01.735841 systemd-logind[2083]: Removed session 12.
Apr 24 23:56:01.896784 systemd[1]: Started sshd@12-172.31.23.136:22-4.175.71.9:42838.service - OpenSSH per-connection server daemon (4.175.71.9:42838).
Apr 24 23:56:02.779882 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:56:02.778857 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:56:02.779653 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:56:03.672349 sshd[6872]: Accepted publickey for core from 4.175.71.9 port 42838 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:56:03.675660 sshd[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:56:03.682539 systemd-logind[2083]: New session 13 of user core.
Apr 24 23:56:03.688831 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:56:04.752523 sshd[6872]: pam_unix(sshd:session): session closed for user core
Apr 24 23:56:04.770884 systemd[1]: sshd@12-172.31.23.136:22-4.175.71.9:42838.service: Deactivated successfully.
Apr 24 23:56:04.786359 systemd-logind[2083]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:56:04.786760 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:56:04.794665 systemd-logind[2083]: Removed session 13.
Apr 24 23:56:09.923642 systemd[1]: Started sshd@13-172.31.23.136:22-4.175.71.9:50948.service - OpenSSH per-connection server daemon (4.175.71.9:50948).
Apr 24 23:56:10.973537 sshd[6915]: Accepted publickey for core from 4.175.71.9 port 50948 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:56:10.976747 sshd[6915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:56:10.982707 systemd-logind[2083]: New session 14 of user core.
Apr 24 23:56:10.985677 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:56:12.261540 sshd[6915]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:12.265169 systemd[1]: sshd@13-172.31.23.136:22-4.175.71.9:50948.service: Deactivated successfully. Apr 24 23:56:12.270807 systemd-logind[2083]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:56:12.271730 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:56:12.273540 systemd-logind[2083]: Removed session 14. Apr 24 23:56:12.437594 systemd[1]: Started sshd@14-172.31.23.136:22-4.175.71.9:50958.service - OpenSSH per-connection server daemon (4.175.71.9:50958). Apr 24 23:56:12.758589 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:56:12.760607 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:56:12.758618 systemd-resolved[1988]: Flushed all caches. Apr 24 23:56:13.456706 sshd[6929]: Accepted publickey for core from 4.175.71.9 port 50958 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 24 23:56:13.458527 sshd[6929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.464346 systemd-logind[2083]: New session 15 of user core. Apr 24 23:56:13.469730 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:56:14.184042 systemd[1]: run-containerd-runc-k8s.io-48a8d638be372cf651e7fb4f88da6e910113fc5d607a8b7c480cb9cc312a87bb-runc.bAPQNj.mount: Deactivated successfully. Apr 24 23:56:14.808022 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:56:14.808700 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:56:14.808033 systemd-resolved[1988]: Flushed all caches. Apr 24 23:56:15.173173 sshd[6929]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:15.188416 systemd[1]: sshd@14-172.31.23.136:22-4.175.71.9:50958.service: Deactivated successfully. Apr 24 23:56:15.198581 systemd-logind[2083]: Session 15 logged out. Waiting for processes to exit. 
Apr 24 23:56:15.199899 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:56:15.201563 systemd-logind[2083]: Removed session 15. Apr 24 23:56:15.343602 systemd[1]: Started sshd@15-172.31.23.136:22-4.175.71.9:50962.service - OpenSSH per-connection server daemon (4.175.71.9:50962). Apr 24 23:56:16.405204 sshd[6962]: Accepted publickey for core from 4.175.71.9 port 50962 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 24 23:56:16.407489 sshd[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:16.413713 systemd-logind[2083]: New session 16 of user core. Apr 24 23:56:16.417600 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:56:18.012204 sshd[6962]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:18.021289 systemd[1]: sshd@15-172.31.23.136:22-4.175.71.9:50962.service: Deactivated successfully. Apr 24 23:56:18.029839 systemd-logind[2083]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:56:18.030527 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:56:18.038229 systemd-logind[2083]: Removed session 16. Apr 24 23:56:18.180613 systemd[1]: Started sshd@16-172.31.23.136:22-4.175.71.9:34104.service - OpenSSH per-connection server daemon (4.175.71.9:34104). Apr 24 23:56:19.179963 sshd[6994]: Accepted publickey for core from 4.175.71.9 port 34104 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 24 23:56:19.188608 sshd[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:19.211413 systemd-logind[2083]: New session 17 of user core. Apr 24 23:56:19.214787 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:56:20.758361 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:56:20.758392 systemd-resolved[1988]: Flushed all caches. Apr 24 23:56:20.761308 systemd-journald[1585]: Under memory pressure, flushing caches. 
Apr 24 23:56:21.000051 sshd[6994]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:21.011890 systemd[1]: sshd@16-172.31.23.136:22-4.175.71.9:34104.service: Deactivated successfully. Apr 24 23:56:21.016932 systemd-logind[2083]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:56:21.017627 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:56:21.021192 systemd-logind[2083]: Removed session 17. Apr 24 23:56:21.176656 systemd[1]: Started sshd@17-172.31.23.136:22-4.175.71.9:34108.service - OpenSSH per-connection server daemon (4.175.71.9:34108). Apr 24 23:56:22.231305 sshd[7005]: Accepted publickey for core from 4.175.71.9 port 34108 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 24 23:56:22.234887 sshd[7005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:22.240714 systemd-logind[2083]: New session 18 of user core. Apr 24 23:56:22.247619 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:56:22.809476 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:56:22.806829 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:56:22.806838 systemd-resolved[1988]: Flushed all caches. Apr 24 23:56:23.125757 sshd[7005]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:23.129861 systemd[1]: sshd@17-172.31.23.136:22-4.175.71.9:34108.service: Deactivated successfully. Apr 24 23:56:23.135894 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:56:23.137571 systemd-logind[2083]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:56:23.138994 systemd-logind[2083]: Removed session 18. Apr 24 23:56:27.162974 systemd[1]: run-containerd-runc-k8s.io-1886524066b039c4d945ce6ef8e258ba82cfe560f7112a6cd4cf3039a192402d-runc.8vmhZi.mount: Deactivated successfully. 
Apr 24 23:56:28.289679 systemd[1]: Started sshd@18-172.31.23.136:22-4.175.71.9:57300.service - OpenSSH per-connection server daemon (4.175.71.9:57300). Apr 24 23:56:29.299034 sshd[7042]: Accepted publickey for core from 4.175.71.9 port 57300 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 24 23:56:29.302689 sshd[7042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:29.308983 systemd-logind[2083]: New session 19 of user core. Apr 24 23:56:29.312692 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:56:30.301595 containerd[2101]: time="2026-04-24T23:56:30.275678585Z" level=info msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" Apr 24 23:56:30.416564 sshd[7042]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:30.424015 systemd[1]: sshd@18-172.31.23.136:22-4.175.71.9:57300.service: Deactivated successfully. Apr 24 23:56:30.446259 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:56:30.447148 systemd-logind[2083]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:56:30.450987 systemd-logind[2083]: Removed session 19. Apr 24 23:56:30.806519 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 24 23:56:30.808842 systemd-journald[1585]: Under memory pressure, flushing caches. Apr 24 23:56:30.806547 systemd-resolved[1988]: Flushed all caches. Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:30.959 [WARNING][7064] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"527ec70e-1bb6-4d30-8070-55e2af7c2275", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003", Pod:"coredns-674b8bbfcf-4c8vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4eaf59c038", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:30.966 
[INFO][7064] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:30.966 [INFO][7064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" iface="eth0" netns="" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:30.966 [INFO][7064] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:30.966 [INFO][7064] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.370 [INFO][7092] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.373 [INFO][7092] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.374 [INFO][7092] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.388 [WARNING][7092] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.388 [INFO][7092] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.390 [INFO][7092] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:56:31.395836 containerd[2101]: 2026-04-24 23:56:31.392 [INFO][7064] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.406380 containerd[2101]: time="2026-04-24T23:56:31.406305354Z" level=info msg="TearDown network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" successfully" Apr 24 23:56:31.406380 containerd[2101]: time="2026-04-24T23:56:31.406367017Z" level=info msg="StopPodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" returns successfully" Apr 24 23:56:31.452531 containerd[2101]: time="2026-04-24T23:56:31.452462383Z" level=info msg="RemovePodSandbox for \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" Apr 24 23:56:31.460823 containerd[2101]: time="2026-04-24T23:56:31.460768758Z" level=info msg="Forcibly stopping sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\"" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.561 [WARNING][7106] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"527ec70e-1bb6-4d30-8070-55e2af7c2275", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"fda000584b502d5f1a35674d740b6898542694c1cead0a2f33bce9bada27e003", Pod:"coredns-674b8bbfcf-4c8vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4eaf59c038", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.562 
[INFO][7106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.562 [INFO][7106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" iface="eth0" netns="" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.562 [INFO][7106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.562 [INFO][7106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.588 [INFO][7113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.588 [INFO][7113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.588 [INFO][7113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.597 [WARNING][7113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.597 [INFO][7113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" HandleID="k8s-pod-network.4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--4c8vb-eth0" Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.598 [INFO][7113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:56:31.606220 containerd[2101]: 2026-04-24 23:56:31.602 [INFO][7106] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8" Apr 24 23:56:31.606220 containerd[2101]: time="2026-04-24T23:56:31.606080100Z" level=info msg="TearDown network for sandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" successfully" Apr 24 23:56:31.692765 containerd[2101]: time="2026-04-24T23:56:31.692597564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:56:31.692765 containerd[2101]: time="2026-04-24T23:56:31.692722641Z" level=info msg="RemovePodSandbox \"4e86ec5266805d6c6950471e7d27140ed69e2878ae20a1102b82f8ff987060c8\" returns successfully" Apr 24 23:56:31.695683 containerd[2101]: time="2026-04-24T23:56:31.695645773Z" level=info msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.744 [WARNING][7127] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54f65b93-ac7d-4a34-935e-59195780993c", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759", Pod:"csi-node-driver-clgnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5907b1d2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.744 [INFO][7127] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.744 [INFO][7127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" iface="eth0" netns="" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.744 [INFO][7127] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.744 [INFO][7127] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.787 [INFO][7134] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.787 [INFO][7134] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.787 [INFO][7134] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.796 [WARNING][7134] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.796 [INFO][7134] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.798 [INFO][7134] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:56:31.804819 containerd[2101]: 2026-04-24 23:56:31.801 [INFO][7127] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.808044 containerd[2101]: time="2026-04-24T23:56:31.805912276Z" level=info msg="TearDown network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" successfully" Apr 24 23:56:31.808044 containerd[2101]: time="2026-04-24T23:56:31.805946797Z" level=info msg="StopPodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" returns successfully" Apr 24 23:56:31.808044 containerd[2101]: time="2026-04-24T23:56:31.806480729Z" level=info msg="RemovePodSandbox for \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" Apr 24 23:56:31.808044 containerd[2101]: time="2026-04-24T23:56:31.806512788Z" level=info msg="Forcibly stopping sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\"" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.930 [WARNING][7150] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54f65b93-ac7d-4a34-935e-59195780993c", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"1940bb92c4be577fde2b071fa0e06628ad2b84d94f1647c674a9ccc881bbd759", Pod:"csi-node-driver-clgnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5907b1d2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.931 [INFO][7150] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.931 [INFO][7150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" iface="eth0" netns="" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.931 [INFO][7150] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.931 [INFO][7150] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.964 [INFO][7158] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.964 [INFO][7158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.964 [INFO][7158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.974 [WARNING][7158] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.974 [INFO][7158] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" HandleID="k8s-pod-network.9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Workload="ip--172--31--23--136-k8s-csi--node--driver--clgnl-eth0" Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.976 [INFO][7158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:56:31.981028 containerd[2101]: 2026-04-24 23:56:31.978 [INFO][7150] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d" Apr 24 23:56:31.983572 containerd[2101]: time="2026-04-24T23:56:31.981021668Z" level=info msg="TearDown network for sandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" successfully" Apr 24 23:56:32.097804 containerd[2101]: time="2026-04-24T23:56:32.097564249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:56:32.097804 containerd[2101]: time="2026-04-24T23:56:32.097666926Z" level=info msg="RemovePodSandbox \"9b75aabb71f2695bfddc44fae2c196b954854c51eb47a42c12a46f3cc8f01d3d\" returns successfully"
Apr 24 23:56:32.098883 containerd[2101]: time="2026-04-24T23:56:32.098851407Z" level=info msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\""
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.165 [WARNING][7172] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed3d5ba2-1e47-4166-82c1-9c12137f6661", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be", Pod:"coredns-674b8bbfcf-6dpzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6f0df2db66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.166 [INFO][7172] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.166 [INFO][7172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" iface="eth0" netns=""
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.166 [INFO][7172] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.166 [INFO][7172] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.191 [INFO][7179] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.191 [INFO][7179] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.192 [INFO][7179] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.199 [WARNING][7179] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.199 [INFO][7179] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.201 [INFO][7179] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:56:32.206248 containerd[2101]: 2026-04-24 23:56:32.203 [INFO][7172] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.206248 containerd[2101]: time="2026-04-24T23:56:32.206080936Z" level=info msg="TearDown network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" successfully"
Apr 24 23:56:32.206248 containerd[2101]: time="2026-04-24T23:56:32.206113369Z" level=info msg="StopPodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" returns successfully"
Apr 24 23:56:32.207065 containerd[2101]: time="2026-04-24T23:56:32.206685277Z" level=info msg="RemovePodSandbox for \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\""
Apr 24 23:56:32.207065 containerd[2101]: time="2026-04-24T23:56:32.206723004Z" level=info msg="Forcibly stopping sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\""
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.250 [WARNING][7194] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed3d5ba2-1e47-4166-82c1-9c12137f6661", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-136", ContainerID:"29d54e7fe7963dd461fe4ca45dbb1df5499565fd6e687895a1a2f0337522c0be", Pod:"coredns-674b8bbfcf-6dpzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6f0df2db66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.250 [INFO][7194] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.250 [INFO][7194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" iface="eth0" netns=""
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.250 [INFO][7194] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.250 [INFO][7194] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.276 [INFO][7202] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.276 [INFO][7202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.276 [INFO][7202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.283 [WARNING][7202] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.283 [INFO][7202] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" HandleID="k8s-pod-network.f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1" Workload="ip--172--31--23--136-k8s-coredns--674b8bbfcf--6dpzw-eth0"
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.284 [INFO][7202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:56:32.289855 containerd[2101]: 2026-04-24 23:56:32.287 [INFO][7194] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1"
Apr 24 23:56:32.289855 containerd[2101]: time="2026-04-24T23:56:32.289780388Z" level=info msg="TearDown network for sandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" successfully"
Apr 24 23:56:32.304237 containerd[2101]: time="2026-04-24T23:56:32.303992920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:56:32.304237 containerd[2101]: time="2026-04-24T23:56:32.304123936Z" level=info msg="RemovePodSandbox \"f8b9fa5279227c45ef14cbac137ee34c53c6bc3f663e2e5b89976254a73478f1\" returns successfully"
Apr 24 23:56:35.590713 systemd[1]: Started sshd@19-172.31.23.136:22-4.175.71.9:47008.service - OpenSSH per-connection server daemon (4.175.71.9:47008).
Apr 24 23:56:36.713619 sshd[7233]: Accepted publickey for core from 4.175.71.9 port 47008 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:56:36.718472 sshd[7233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:56:36.738343 systemd-logind[2083]: New session 20 of user core.
Apr 24 23:56:36.743702 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:56:36.764401 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:56:36.763001 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:56:36.763008 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:56:38.598972 sshd[7233]: pam_unix(sshd:session): session closed for user core
Apr 24 23:56:38.603587 systemd-logind[2083]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:56:38.605058 systemd[1]: sshd@19-172.31.23.136:22-4.175.71.9:47008.service: Deactivated successfully.
Apr 24 23:56:38.611513 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:56:38.612880 systemd-logind[2083]: Removed session 20.
Apr 24 23:56:38.806542 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:56:38.808380 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:56:38.806550 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:56:43.780209 systemd[1]: Started sshd@20-172.31.23.136:22-4.175.71.9:47018.service - OpenSSH per-connection server daemon (4.175.71.9:47018).
Apr 24 23:56:44.007939 update_engine[2084]: I20260424 23:56:44.007858 2084 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 24 23:56:44.007939 update_engine[2084]: I20260424 23:56:44.007941 2084 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 24 23:56:44.011286 update_engine[2084]: I20260424 23:56:44.011222 2084 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 24 23:56:44.013862 update_engine[2084]: I20260424 23:56:44.013820 2084 omaha_request_params.cc:62] Current group set to lts
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.015959 2084 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.015993 2084 update_attempter.cc:643] Scheduling an action processor start.
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.016022 2084 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.016079 2084 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.016170 2084 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.016181 2084 omaha_request_action.cc:272] Request:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]:
Apr 24 23:56:44.017610 update_engine[2084]: I20260424 23:56:44.016191 2084 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:56:44.038199 update_engine[2084]: I20260424 23:56:44.038074 2084 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:56:44.038716 update_engine[2084]: I20260424 23:56:44.038667 2084 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:56:44.045426 locksmithd[2125]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 24 23:56:44.050187 update_engine[2084]: E20260424 23:56:44.050120 2084 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:56:44.050447 update_engine[2084]: I20260424 23:56:44.050249 2084 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 24 23:56:44.761795 systemd-journald[1585]: Under memory pressure, flushing caches.
Apr 24 23:56:44.762012 kernel: hrtimer: interrupt took 690025 ns
Apr 24 23:56:44.758639 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 24 23:56:44.758716 systemd-resolved[1988]: Flushed all caches.
Apr 24 23:56:44.885434 sshd[7279]: Accepted publickey for core from 4.175.71.9 port 47018 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:56:44.890151 sshd[7279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:56:44.897053 systemd-logind[2083]: New session 21 of user core.
Apr 24 23:56:44.904460 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:56:46.623137 sshd[7279]: pam_unix(sshd:session): session closed for user core
Apr 24 23:56:46.627755 systemd[1]: sshd@20-172.31.23.136:22-4.175.71.9:47018.service: Deactivated successfully.
Apr 24 23:56:46.633984 systemd-logind[2083]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:56:46.635378 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:56:46.638877 systemd-logind[2083]: Removed session 21.
Apr 24 23:56:53.929494 update_engine[2084]: I20260424 23:56:53.929407 2084 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:56:53.930077 update_engine[2084]: I20260424 23:56:53.929790 2084 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:56:53.930158 update_engine[2084]: I20260424 23:56:53.930075 2084 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:56:53.932599 update_engine[2084]: E20260424 23:56:53.932546 2084 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:56:53.932707 update_engine[2084]: I20260424 23:56:53.932633 2084 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 24 23:57:02.502739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af-rootfs.mount: Deactivated successfully.
Apr 24 23:57:02.526050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae-rootfs.mount: Deactivated successfully.
Apr 24 23:57:02.643634 containerd[2101]: time="2026-04-24T23:57:02.596318471Z" level=info msg="shim disconnected" id=f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af namespace=k8s.io
Apr 24 23:57:02.647318 containerd[2101]: time="2026-04-24T23:57:02.596409773Z" level=info msg="shim disconnected" id=c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae namespace=k8s.io
Apr 24 23:57:02.647318 containerd[2101]: time="2026-04-24T23:57:02.643888775Z" level=warning msg="cleaning up after shim disconnected" id=c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae namespace=k8s.io
Apr 24 23:57:02.647318 containerd[2101]: time="2026-04-24T23:57:02.643911762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:57:02.647318 containerd[2101]: time="2026-04-24T23:57:02.643771679Z" level=warning msg="cleaning up after shim disconnected" id=f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af namespace=k8s.io
Apr 24 23:57:02.647318 containerd[2101]: time="2026-04-24T23:57:02.643956025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:57:03.263201 kubelet[3571]: I0424 23:57:03.258246 3571 scope.go:117] "RemoveContainer" containerID="f781c1fb22417d9309e51039040e45e3b8eadc24e19fb23d1526a2a6d69223af"
Apr 24 23:57:03.267562 kubelet[3571]: I0424 23:57:03.263572 3571 scope.go:117] "RemoveContainer" containerID="c1395480211660890b140570112d2ba8f6b1ae3e9022e314c8da3ad82aa46dae"
Apr 24 23:57:03.317494 containerd[2101]: time="2026-04-24T23:57:03.317231931Z" level=info msg="CreateContainer within sandbox \"1e2df3d1a1a1b113974ffcd37f662f04045c79a74fbe0a514e9acc3250944ccb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 24 23:57:03.320056 containerd[2101]: time="2026-04-24T23:57:03.318951279Z" level=info msg="CreateContainer within sandbox \"0e068ced86d4063022b8f9411084f7b1c5a2ea5c97549c6363cbbb89e9bfc290\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 24 23:57:03.438610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33907298.mount: Deactivated successfully.
Apr 24 23:57:03.455648 containerd[2101]: time="2026-04-24T23:57:03.455593898Z" level=info msg="CreateContainer within sandbox \"0e068ced86d4063022b8f9411084f7b1c5a2ea5c97549c6363cbbb89e9bfc290\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb1f3592c4a51344897eaea1f8a75a21ccb0e22bf38350835377461e5b914db4\""
Apr 24 23:57:03.460015 containerd[2101]: time="2026-04-24T23:57:03.459964856Z" level=info msg="CreateContainer within sandbox \"1e2df3d1a1a1b113974ffcd37f662f04045c79a74fbe0a514e9acc3250944ccb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b1a1c71c096b8ac91c97f03580d273248c0f891985f3fd1cf1ce3ed1f31a09bd\""
Apr 24 23:57:03.460230 containerd[2101]: time="2026-04-24T23:57:03.460206470Z" level=info msg="StartContainer for \"cb1f3592c4a51344897eaea1f8a75a21ccb0e22bf38350835377461e5b914db4\""
Apr 24 23:57:03.462248 containerd[2101]: time="2026-04-24T23:57:03.462207751Z" level=info msg="StartContainer for \"b1a1c71c096b8ac91c97f03580d273248c0f891985f3fd1cf1ce3ed1f31a09bd\""
Apr 24 23:57:03.641525 containerd[2101]: time="2026-04-24T23:57:03.641396672Z" level=info msg="StartContainer for \"b1a1c71c096b8ac91c97f03580d273248c0f891985f3fd1cf1ce3ed1f31a09bd\" returns successfully"
Apr 24 23:57:03.650221 containerd[2101]: time="2026-04-24T23:57:03.650066729Z" level=info msg="StartContainer for \"cb1f3592c4a51344897eaea1f8a75a21ccb0e22bf38350835377461e5b914db4\" returns successfully"
Apr 24 23:57:03.934975 update_engine[2084]: I20260424 23:57:03.934358 2084 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:57:03.934975 update_engine[2084]: I20260424 23:57:03.934638 2084 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:57:03.934975 update_engine[2084]: I20260424 23:57:03.934835 2084 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:57:03.936319 update_engine[2084]: E20260424 23:57:03.935760 2084 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:57:03.936319 update_engine[2084]: I20260424 23:57:03.935814 2084 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 24 23:57:06.258186 containerd[2101]: time="2026-04-24T23:57:06.257828754Z" level=info msg="shim disconnected" id=d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16 namespace=k8s.io
Apr 24 23:57:06.258186 containerd[2101]: time="2026-04-24T23:57:06.257897179Z" level=warning msg="cleaning up after shim disconnected" id=d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16 namespace=k8s.io
Apr 24 23:57:06.258186 containerd[2101]: time="2026-04-24T23:57:06.257909945Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:57:06.261136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16-rootfs.mount: Deactivated successfully.
Apr 24 23:57:07.236865 kubelet[3571]: I0424 23:57:07.236826 3571 scope.go:117] "RemoveContainer" containerID="d2b41e44aa8656eca56efd0b6a15945cb9bf0c758b071bba6d3e5e3d150d0a16"
Apr 24 23:57:07.239983 containerd[2101]: time="2026-04-24T23:57:07.239945486Z" level=info msg="CreateContainer within sandbox \"9139ccc0734a14b22ef506a838f01d3118f1ecc03da83eef0682b153bf777dcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 24 23:57:07.281812 containerd[2101]: time="2026-04-24T23:57:07.281754896Z" level=info msg="CreateContainer within sandbox \"9139ccc0734a14b22ef506a838f01d3118f1ecc03da83eef0682b153bf777dcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dfe1a9d268b508771ba1507e15c277fc0a7dea665f17e113b6c5886dccc47136\""
Apr 24 23:57:07.282474 containerd[2101]: time="2026-04-24T23:57:07.282390697Z" level=info msg="StartContainer for \"dfe1a9d268b508771ba1507e15c277fc0a7dea665f17e113b6c5886dccc47136\""
Apr 24 23:57:07.329969 systemd[1]: run-containerd-runc-k8s.io-dfe1a9d268b508771ba1507e15c277fc0a7dea665f17e113b6c5886dccc47136-runc.Gx1l6l.mount: Deactivated successfully.
Apr 24 23:57:07.383504 containerd[2101]: time="2026-04-24T23:57:07.383472730Z" level=info msg="StartContainer for \"dfe1a9d268b508771ba1507e15c277fc0a7dea665f17e113b6c5886dccc47136\" returns successfully"
Apr 24 23:57:09.394917 kubelet[3571]: E0424 23:57:09.394837 3571 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-136)"
Apr 24 23:57:13.934502 update_engine[2084]: I20260424 23:57:13.934427 2084 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:57:13.936179 update_engine[2084]: I20260424 23:57:13.934728 2084 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:57:13.936179 update_engine[2084]: I20260424 23:57:13.934991 2084 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:57:13.936179 update_engine[2084]: E20260424 23:57:13.935688 2084 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:57:13.936179 update_engine[2084]: I20260424 23:57:13.935734 2084 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 24 23:57:13.936179 update_engine[2084]: I20260424 23:57:13.935745 2084 omaha_request_action.cc:617] Omaha request response:
Apr 24 23:57:13.937970 update_engine[2084]: E20260424 23:57:13.936371 2084 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936424 2084 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936431 2084 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936437 2084 update_attempter.cc:306] Processing Done.
Apr 24 23:57:13.937970 update_engine[2084]: E20260424 23:57:13.936453 2084 update_attempter.cc:619] Update failed.
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936460 2084 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936465 2084 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936470 2084 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936547 2084 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936578 2084 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936586 2084 omaha_request_action.cc:272] Request:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]:
Apr 24 23:57:13.937970 update_engine[2084]: I20260424 23:57:13.936595 2084 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:57:13.940029 locksmithd[2125]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.936765 2084 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.936915 2084 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:57:13.940423 update_engine[2084]: E20260424 23:57:13.937517 2084 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937557 2084 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937564 2084 omaha_request_action.cc:617] Omaha request response:
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937574 2084 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937579 2084 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937584 2084 update_attempter.cc:306] Processing Done.
Apr 24 23:57:13.940423 update_engine[2084]: I20260424 23:57:13.937590 2084 update_attempter.cc:310] Error event sent.
Apr 24 23:57:13.943290 update_engine[2084]: I20260424 23:57:13.943130 2084 update_check_scheduler.cc:74] Next update check in 49m19s
Apr 24 23:57:13.943653 locksmithd[2125]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0