Apr 17 23:39:56.947275 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:39:56.947318 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:39:56.947369 kernel: BIOS-provided physical RAM map:
Apr 17 23:39:56.947382 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:39:56.947394 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:39:56.947407 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:39:56.947423 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:39:56.947437 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:39:56.947451 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:39:56.947469 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:39:56.947482 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:39:56.947496 kernel: NX (Execute Disable) protection: active
Apr 17 23:39:56.947508 kernel: APIC: Static calls initialized
Apr 17 23:39:56.947520 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:39:56.947536 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:39:56.947553 kernel: SMBIOS 2.7 present.
Apr 17 23:39:56.947567 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:39:56.947581 kernel: Hypervisor detected: KVM
Apr 17 23:39:56.947594 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:39:56.947606 kernel: kvm-clock: using sched offset of 4093726595 cycles
Apr 17 23:39:56.947621 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:39:56.947634 kernel: tsc: Detected 2499.998 MHz processor
Apr 17 23:39:56.947647 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:39:56.947661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:39:56.947675 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:39:56.947693 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:39:56.947708 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:39:56.947722 kernel: Using GB pages for direct mapping
Apr 17 23:39:56.947735 kernel: Secure boot disabled
Apr 17 23:39:56.947749 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:39:56.947764 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:39:56.947780 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:39:56.947795 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:39:56.947811 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:39:56.947830 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:39:56.947845 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:39:56.947858 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:39:56.947871 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:39:56.947884 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:39:56.947897 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:39:56.947915 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:39:56.947931 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:39:56.947945 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:39:56.947958 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:39:56.947973 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:39:56.947986 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:39:56.948000 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:39:56.948016 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:39:56.948030 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:39:56.948043 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:39:56.948057 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:39:56.948071 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:39:56.948085 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:39:56.948098 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:39:56.948111 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:39:56.948125 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:39:56.948139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:39:56.948155 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:39:56.948169 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:39:56.948182 kernel: Zone ranges:
Apr 17 23:39:56.948196 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:39:56.948210 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:39:56.948224 kernel: Normal empty
Apr 17 23:39:56.948238 kernel: Movable zone start for each node
Apr 17 23:39:56.948252 kernel: Early memory node ranges
Apr 17 23:39:56.948266 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:39:56.948283 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:39:56.948297 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:39:56.948312 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:39:56.948340 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:39:56.948360 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:39:56.948373 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:39:56.948386 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:39:56.948398 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:39:56.948411 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:39:56.948429 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:39:56.948443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:39:56.948457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:39:56.948472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:39:56.948486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:39:56.948500 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:39:56.948514 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:39:56.948529 kernel: TSC deadline timer available
Apr 17 23:39:56.948543 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:39:56.948559 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:39:56.948572 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:39:56.948586 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:39:56.948601 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:39:56.948613 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:39:56.948626 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:39:56.948640 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:39:56.948654 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:39:56.948667 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:39:56.948682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:39:56.948714 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:39:56.948732 kernel: random: crng init done
Apr 17 23:39:56.948745 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:39:56.948758 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:39:56.948769 kernel: Fallback order for Node 0: 0
Apr 17 23:39:56.948784 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:39:56.948798 kernel: Policy zone: DMA32
Apr 17 23:39:56.948812 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:39:56.948833 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:39:56.948845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:39:56.948858 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:39:56.948872 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:39:56.948885 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:39:56.948899 kernel: Dynamic Preempt: voluntary
Apr 17 23:39:56.948915 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:39:56.948931 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:39:56.948950 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:39:56.948966 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:39:56.948981 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:39:56.948996 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:39:56.949012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:39:56.949027 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:39:56.949041 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:39:56.949057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:39:56.949087 kernel: Console: colour dummy device 80x25
Apr 17 23:39:56.949101 kernel: printk: console [tty0] enabled
Apr 17 23:39:56.949116 kernel: printk: console [ttyS0] enabled
Apr 17 23:39:56.949131 kernel: ACPI: Core revision 20230628
Apr 17 23:39:56.949146 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:39:56.949165 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:39:56.949180 kernel: x2apic enabled
Apr 17 23:39:56.949195 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:39:56.949210 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 17 23:39:56.949229 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 17 23:39:56.949245 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:39:56.949260 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:39:56.949275 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:39:56.949290 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:39:56.949305 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:39:56.949320 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:39:56.949481 kernel: RETBleed: Vulnerable
Apr 17 23:39:56.949497 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:39:56.949512 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:39:56.949527 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:39:56.949546 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:39:56.949561 kernel: active return thunk: its_return_thunk
Apr 17 23:39:56.949577 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:39:56.949591 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:39:56.949607 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:39:56.949622 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:39:56.949637 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:39:56.949653 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:39:56.949668 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:39:56.949683 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:39:56.949702 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:39:56.949717 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:39:56.949732 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:39:56.949748 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:39:56.949762 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:39:56.949778 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:39:56.949793 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:39:56.949808 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:39:56.949823 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:39:56.949838 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:39:56.949853 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:39:56.949869 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:39:56.949887 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:39:56.949902 kernel: landlock: Up and running.
Apr 17 23:39:56.949917 kernel: SELinux: Initializing.
Apr 17 23:39:56.949932 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:39:56.949947 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:39:56.949961 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:39:56.949976 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:39:56.949991 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:39:56.950013 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:39:56.950031 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:39:56.950049 kernel: signal: max sigframe size: 3632
Apr 17 23:39:56.950065 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:39:56.950081 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:39:56.950098 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:39:56.950113 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:39:56.950126 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:39:56.950140 kernel: .... node #0, CPUs: #1
Apr 17 23:39:56.950155 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:39:56.950170 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:39:56.950188 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:39:56.950202 kernel: smpboot: Max logical packages: 1
Apr 17 23:39:56.950216 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 17 23:39:56.950230 kernel: devtmpfs: initialized
Apr 17 23:39:56.950263 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:39:56.950277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:39:56.950292 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:39:56.950306 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:39:56.950320 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:39:56.950351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:39:56.950365 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:39:56.950387 kernel: audit: type=2000 audit(1776469197.378:1): state=initialized audit_enabled=0 res=1
Apr 17 23:39:56.950402 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:39:56.950416 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:39:56.950431 kernel: cpuidle: using governor menu
Apr 17 23:39:56.950446 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:39:56.950460 kernel: dca service started, version 1.12.1
Apr 17 23:39:56.950475 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:39:56.950493 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:39:56.950508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:39:56.950523 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:39:56.950537 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:39:56.950553 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:39:56.950568 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:39:56.950583 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:39:56.950598 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:39:56.950612 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:39:56.950630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:39:56.950646 kernel: ACPI: Interpreter enabled
Apr 17 23:39:56.950660 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:39:56.950676 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:39:56.950691 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:39:56.950707 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:39:56.950722 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:39:56.950738 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:39:56.950959 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:39:56.951119 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:39:56.951259 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:39:56.951279 kernel: acpiphp: Slot [3] registered
Apr 17 23:39:56.951294 kernel: acpiphp: Slot [4] registered
Apr 17 23:39:56.951308 kernel: acpiphp: Slot [5] registered
Apr 17 23:39:56.951323 kernel: acpiphp: Slot [6] registered
Apr 17 23:39:56.952392 kernel: acpiphp: Slot [7] registered
Apr 17 23:39:56.952417 kernel: acpiphp: Slot [8] registered
Apr 17 23:39:56.952435 kernel: acpiphp: Slot [9] registered
Apr 17 23:39:56.952451 kernel: acpiphp: Slot [10] registered
Apr 17 23:39:56.952467 kernel: acpiphp: Slot [11] registered
Apr 17 23:39:56.952484 kernel: acpiphp: Slot [12] registered
Apr 17 23:39:56.952500 kernel: acpiphp: Slot [13] registered
Apr 17 23:39:56.952517 kernel: acpiphp: Slot [14] registered
Apr 17 23:39:56.952534 kernel: acpiphp: Slot [15] registered
Apr 17 23:39:56.952550 kernel: acpiphp: Slot [16] registered
Apr 17 23:39:56.952569 kernel: acpiphp: Slot [17] registered
Apr 17 23:39:56.952586 kernel: acpiphp: Slot [18] registered
Apr 17 23:39:56.952602 kernel: acpiphp: Slot [19] registered
Apr 17 23:39:56.952619 kernel: acpiphp: Slot [20] registered
Apr 17 23:39:56.952635 kernel: acpiphp: Slot [21] registered
Apr 17 23:39:56.952651 kernel: acpiphp: Slot [22] registered
Apr 17 23:39:56.952667 kernel: acpiphp: Slot [23] registered
Apr 17 23:39:56.952684 kernel: acpiphp: Slot [24] registered
Apr 17 23:39:56.952700 kernel: acpiphp: Slot [25] registered
Apr 17 23:39:56.952717 kernel: acpiphp: Slot [26] registered
Apr 17 23:39:56.952735 kernel: acpiphp: Slot [27] registered
Apr 17 23:39:56.952751 kernel: acpiphp: Slot [28] registered
Apr 17 23:39:56.952768 kernel: acpiphp: Slot [29] registered
Apr 17 23:39:56.952784 kernel: acpiphp: Slot [30] registered
Apr 17 23:39:56.952800 kernel: acpiphp: Slot [31] registered
Apr 17 23:39:56.952817 kernel: PCI host bridge to bus 0000:00
Apr 17 23:39:56.952992 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:39:56.953123 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:39:56.953253 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:39:56.954469 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:39:56.954630 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:39:56.954757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:39:56.954917 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:39:56.955075 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:39:56.955241 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:39:56.956513 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:39:56.956667 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:39:56.956810 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:39:56.956950 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:39:56.957093 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:39:56.957233 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:39:56.961164 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:39:56.961377 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:39:56.961531 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:39:56.961676 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:39:56.961824 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:39:56.961969 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:39:56.962121 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:39:56.962275 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:39:56.962453 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:39:56.962598 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:39:56.962621 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:39:56.962640 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:39:56.962657 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:39:56.962674 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:39:56.962691 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:39:56.962712 kernel: iommu: Default domain type: Translated
Apr 17 23:39:56.962729 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:39:56.962746 kernel: efivars: Registered efivars operations
Apr 17 23:39:56.962763 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:39:56.962780 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:39:56.962797 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:39:56.962814 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:39:56.962958 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:39:56.963131 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:39:56.963295 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:39:56.963320 kernel: vgaarb: loaded
Apr 17 23:39:56.963397 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:39:56.963417 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:39:56.963436 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:39:56.963455 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:39:56.963474 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:39:56.963492 kernel: pnp: PnP ACPI init
Apr 17 23:39:56.963517 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:39:56.963536 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:39:56.963556 kernel: NET: Registered PF_INET protocol family
Apr 17 23:39:56.963575 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:39:56.963594 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:39:56.963613 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:39:56.963633 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:39:56.963652 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:39:56.963671 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:39:56.963694 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:39:56.963713 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:39:56.963732 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:39:56.963750 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:39:56.963905 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:39:56.964050 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:39:56.964196 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:39:56.964313 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:39:56.964461 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:39:56.964611 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:39:56.964632 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:39:56.964650 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:39:56.964667 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 17 23:39:56.964684 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:39:56.964702 kernel: Initialise system trusted keyrings
Apr 17 23:39:56.964718 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:39:56.964735 kernel: Key type asymmetric registered
Apr 17 23:39:56.964755 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:39:56.964770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:39:56.964787 kernel: io scheduler mq-deadline registered
Apr 17 23:39:56.964803 kernel: io scheduler kyber registered
Apr 17 23:39:56.964820 kernel: io scheduler bfq registered
Apr 17 23:39:56.964836 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:39:56.964853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:39:56.964869 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:39:56.964886 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:39:56.964906 kernel: i8042: Warning: Keylock active
Apr 17 23:39:56.964922 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:39:56.964939 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:39:56.965099 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:39:56.965233 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:39:56.967475 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:39:56 UTC (1776469196)
Apr 17 23:39:56.967632 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:39:56.967655 kernel: intel_pstate: CPU model not supported
Apr 17 23:39:56.967679 kernel: efifb: probing for efifb
Apr 17 23:39:56.967696 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:39:56.967713 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:39:56.967730 kernel: efifb: scrolling: redraw
Apr 17 23:39:56.967747 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:39:56.967764 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:39:56.967782 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:39:56.967799 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:39:56.967816 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:39:56.967835 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:39:56.967852 kernel: Segment Routing with IPv6
Apr 17 23:39:56.967870 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:39:56.967887 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:39:56.967903 kernel: Key type dns_resolver registered
Apr 17 23:39:56.967920 kernel: IPI shorthand broadcast: enabled
Apr 17 23:39:56.967964 kernel: sched_clock: Marking stable (492003127, 130093884)->(691334263, -69237252)
Apr 17 23:39:56.967986 kernel: registered taskstats version 1
Apr 17 23:39:56.968004 kernel: Loading compiled-in X.509 certificates
Apr 17 23:39:56.968024 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:39:56.968042 kernel: Key type .fscrypt registered
Apr 17 23:39:56.968059 kernel: Key type fscrypt-provisioning registered
Apr 17 23:39:56.968080 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:39:56.968098 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:39:56.968115 kernel: ima: No architecture policies found
Apr 17 23:39:56.968133 kernel: clk: Disabling unused clocks
Apr 17 23:39:56.968150 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:39:56.968168 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:39:56.968189 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:39:56.968207 kernel: Run /init as init process
Apr 17 23:39:56.968224 kernel: with arguments:
Apr 17 23:39:56.968241 kernel: /init
Apr 17 23:39:56.968258 kernel: with environment:
Apr 17 23:39:56.968275 kernel: HOME=/
Apr 17 23:39:56.968292 kernel: TERM=linux
Apr 17 23:39:56.968313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:39:56.968351 systemd[1]: Detected virtualization amazon.
Apr 17 23:39:56.968367 systemd[1]: Detected architecture x86-64. Apr 17 23:39:56.968383 systemd[1]: Running in initrd. Apr 17 23:39:56.968400 systemd[1]: No hostname configured, using default hostname. Apr 17 23:39:56.968415 systemd[1]: Hostname set to . Apr 17 23:39:56.968431 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:39:56.968447 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:39:56.968464 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:39:56.968484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:39:56.968501 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:39:56.968518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:39:56.968534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:39:56.968554 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:39:56.968576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:39:56.968591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:39:56.968608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:39:56.968625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:39:56.968643 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:39:56.968661 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:39:56.968680 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:39:56.968699 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:39:56.968717 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:39:56.968735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:39:56.968753 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:39:56.968772 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:39:56.968789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:39:56.968805 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:39:56.968820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:39:56.968835 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:39:56.968855 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:39:56.968873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:39:56.968892 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:39:56.968910 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:39:56.968928 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:39:56.968944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:39:56.968958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:56.968973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:39:56.969025 systemd-journald[179]: Collecting audit messages is disabled. Apr 17 23:39:56.969067 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:39:56.969086 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:39:56.969108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 17 23:39:56.969128 systemd-journald[179]: Journal started Apr 17 23:39:56.969166 systemd-journald[179]: Runtime Journal (/run/log/journal/ec26eb21c1e6027d498c10261c570b51) is 4.7M, max 38.2M, 33.4M free. Apr 17 23:39:56.943115 systemd-modules-load[180]: Inserted module 'overlay' Apr 17 23:39:56.983354 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:39:56.984152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:56.994593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:39:57.004472 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:39:57.004509 kernel: Bridge firewalling registered Apr 17 23:39:57.002926 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 17 23:39:57.008432 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 17 23:39:57.008610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:39:57.009632 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:39:57.012949 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:39:57.022410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:39:57.026952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:39:57.029560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:39:57.043156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:57.045818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:39:57.053596 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 17 23:39:57.058538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:39:57.060413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:39:57.068820 dracut-cmdline[213]: dracut-dracut-053 Apr 17 23:39:57.072976 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:39:57.115765 systemd-resolved[214]: Positive Trust Anchors: Apr 17 23:39:57.115786 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:39:57.115846 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:39:57.124160 systemd-resolved[214]: Defaulting to hostname 'linux'. Apr 17 23:39:57.127481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:39:57.128189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 23:39:57.163372 kernel: SCSI subsystem initialized Apr 17 23:39:57.174361 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:39:57.185432 kernel: iscsi: registered transport (tcp) Apr 17 23:39:57.206443 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:39:57.206525 kernel: QLogic iSCSI HBA Driver Apr 17 23:39:57.245230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:39:57.250504 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:39:57.276501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:39:57.276579 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:39:57.279358 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:39:57.320374 kernel: raid6: avx512x4 gen() 17960 MB/s Apr 17 23:39:57.338353 kernel: raid6: avx512x2 gen() 17865 MB/s Apr 17 23:39:57.356356 kernel: raid6: avx512x1 gen() 17859 MB/s Apr 17 23:39:57.374350 kernel: raid6: avx2x4 gen() 17806 MB/s Apr 17 23:39:57.392352 kernel: raid6: avx2x2 gen() 17759 MB/s Apr 17 23:39:57.410560 kernel: raid6: avx2x1 gen() 13763 MB/s Apr 17 23:39:57.410606 kernel: raid6: using algorithm avx512x4 gen() 17960 MB/s Apr 17 23:39:57.429558 kernel: raid6: .... xor() 7724 MB/s, rmw enabled Apr 17 23:39:57.429611 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:39:57.451367 kernel: xor: automatically using best checksumming function avx Apr 17 23:39:57.610365 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:39:57.621184 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:39:57.626522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:39:57.645055 systemd-udevd[398]: Using default interface naming scheme 'v255'. 
Apr 17 23:39:57.650282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:39:57.658224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:39:57.677238 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 17 23:39:57.707453 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:39:57.712536 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:39:57.765440 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:39:57.772186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:39:57.801693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:39:57.804595 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:39:57.806229 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:39:57.806905 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:39:57.813549 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:39:57.841992 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:39:57.869890 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:39:57.886285 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 23:39:57.886598 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 23:39:57.889500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:39:57.889670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:57.899076 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 17 23:39:57.892562 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:39:57.893150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:39:57.893427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:57.894052 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:57.906664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:57.914209 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:4c:c8:ee:48:57 Apr 17 23:39:57.914511 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:39:57.914535 kernel: AES CTR mode by8 optimization enabled Apr 17 23:39:57.929415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:39:57.930387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:57.936787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:39:57.939145 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:39:57.946529 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 23:39:57.950932 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 17 23:39:57.964864 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 23:39:57.972920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:39:57.979997 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:39:57.980039 kernel: GPT:9289727 != 33554431 Apr 17 23:39:57.980054 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:39:57.980066 kernel: GPT:9289727 != 33554431 Apr 17 23:39:57.980077 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:39:57.980088 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:39:57.985665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:39:58.003645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:39:58.072617 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (446) Apr 17 23:39:58.103302 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 23:39:58.129395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 23:39:58.135348 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (450) Apr 17 23:39:58.159506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 23:39:58.161494 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 23:39:58.170582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:39:58.179537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:39:58.181934 disk-uuid[629]: Primary Header is updated. Apr 17 23:39:58.181934 disk-uuid[629]: Secondary Entries is updated. Apr 17 23:39:58.181934 disk-uuid[629]: Secondary Header is updated. Apr 17 23:39:58.188435 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:39:58.196356 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:39:58.202354 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:39:59.207505 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:39:59.207589 disk-uuid[630]: The operation has completed successfully. Apr 17 23:39:59.358752 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:39:59.358896 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:39:59.375574 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 17 23:39:59.381583 sh[973]: Success Apr 17 23:39:59.397463 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:39:59.502274 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:39:59.516506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:39:59.520116 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:39:59.559532 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:39:59.559609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:59.561840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:39:59.565733 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:39:59.565798 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:39:59.645440 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:39:59.676797 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:39:59.680722 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:39:59.695861 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:39:59.719917 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:39:59.825567 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:59.825654 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:39:59.825679 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:39:59.843905 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:39:59.873158 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 17 23:39:59.874074 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:39:59.898693 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:39:59.910574 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:40:00.011207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:40:00.020817 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:40:00.185518 systemd-networkd[1166]: lo: Link UP Apr 17 23:40:00.185530 systemd-networkd[1166]: lo: Gained carrier Apr 17 23:40:00.187485 systemd-networkd[1166]: Enumeration completed Apr 17 23:40:00.199086 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:40:00.199096 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:40:00.214451 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:40:00.239732 systemd[1]: Reached target network.target - Network. Apr 17 23:40:00.277629 systemd-networkd[1166]: eth0: Link UP Apr 17 23:40:00.277635 systemd-networkd[1166]: eth0: Gained carrier Apr 17 23:40:00.277652 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:40:00.329066 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.24.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:40:00.692286 ignition[1104]: Ignition 2.19.0 Apr 17 23:40:00.692309 ignition[1104]: Stage: fetch-offline Apr 17 23:40:00.692634 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:00.692648 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:00.693115 ignition[1104]: Ignition finished successfully Apr 17 23:40:00.698618 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:40:00.708603 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:40:00.742510 ignition[1176]: Ignition 2.19.0 Apr 17 23:40:00.742535 ignition[1176]: Stage: fetch Apr 17 23:40:00.743034 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:00.743048 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:00.743169 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:00.765708 ignition[1176]: PUT result: OK Apr 17 23:40:00.768912 ignition[1176]: parsed url from cmdline: "" Apr 17 23:40:00.768924 ignition[1176]: no config URL provided Apr 17 23:40:00.768935 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:40:00.768951 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:40:00.768976 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:00.771154 ignition[1176]: PUT result: OK Apr 17 23:40:00.771385 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 23:40:00.777170 ignition[1176]: GET result: OK Apr 17 23:40:00.777295 ignition[1176]: parsing config with SHA512: cc87116896d3b5ad526a5ce585726590cae88a5de069aca71f0148e4e6456d8f2fb1f5c8d3cd3f0dc8749f5c614c258e4a63fe541f45d2b9c341c0930efe0861 Apr 17 23:40:00.786509 unknown[1176]: fetched base config from "system"
Apr 17 23:40:00.786531 unknown[1176]: fetched base config from "system" Apr 17 23:40:00.788175 ignition[1176]: fetch: fetch complete Apr 17 23:40:00.786541 unknown[1176]: fetched user config from "aws" Apr 17 23:40:00.788183 ignition[1176]: fetch: fetch passed Apr 17 23:40:00.793913 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:40:00.788252 ignition[1176]: Ignition finished successfully Apr 17 23:40:00.799577 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:40:00.819835 ignition[1183]: Ignition 2.19.0 Apr 17 23:40:00.819849 ignition[1183]: Stage: kargs Apr 17 23:40:00.820354 ignition[1183]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:00.820371 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:00.820486 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:00.821574 ignition[1183]: PUT result: OK Apr 17 23:40:00.833733 ignition[1183]: kargs: kargs passed Apr 17 23:40:00.833837 ignition[1183]: Ignition finished successfully Apr 17 23:40:00.839657 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:40:00.852706 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:40:00.900605 ignition[1189]: Ignition 2.19.0 Apr 17 23:40:00.900620 ignition[1189]: Stage: disks Apr 17 23:40:00.901127 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:00.901142 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:00.901284 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:00.904014 ignition[1189]: PUT result: OK Apr 17 23:40:00.913886 ignition[1189]: disks: disks passed Apr 17 23:40:00.913992 ignition[1189]: Ignition finished successfully Apr 17 23:40:00.916740 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:40:00.917515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:40:00.923097 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:40:00.923872 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:40:00.924564 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:40:00.925775 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:40:00.937923 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:40:00.999704 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:40:01.004268 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:40:01.019056 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:40:01.223369 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:40:01.224744 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:40:01.226487 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:40:01.236669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:40:01.246008 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:40:01.249844 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:40:01.249896 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:40:01.249922 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:40:01.254070 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:40:01.262770 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 23:40:01.271373 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216) Apr 17 23:40:01.276623 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:40:01.276707 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:40:01.276730 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:40:01.286354 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:40:01.287915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:40:01.575946 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:40:01.592540 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:40:01.598447 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:40:01.603047 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:40:01.802366 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:40:01.812494 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:40:01.817659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:40:01.824550 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:40:01.826771 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:40:01.856357 ignition[1328]: INFO : Ignition 2.19.0 Apr 17 23:40:01.856357 ignition[1328]: INFO : Stage: mount Apr 17 23:40:01.859957 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:01.859957 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:01.859957 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:01.863001 ignition[1328]: INFO : PUT result: OK Apr 17 23:40:01.867388 ignition[1328]: INFO : mount: mount passed Apr 17 23:40:01.867388 ignition[1328]: INFO : Ignition finished successfully Apr 17 23:40:01.869602 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:40:01.876544 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:40:01.878118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:40:02.173604 systemd-networkd[1166]: eth0: Gained IPv6LL Apr 17 23:40:02.230613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:40:02.250366 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1341) Apr 17 23:40:02.255194 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:40:02.255267 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:40:02.255290 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:40:02.262355 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:40:02.264929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:40:02.287998 ignition[1358]: INFO : Ignition 2.19.0 Apr 17 23:40:02.287998 ignition[1358]: INFO : Stage: files Apr 17 23:40:02.289546 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:40:02.289546 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:40:02.289546 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:40:02.289546 ignition[1358]: INFO : PUT result: OK Apr 17 23:40:02.292441 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:40:02.293276 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:40:02.293276 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:40:02.319440 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:40:02.320615 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:40:02.320615 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:40:02.320033 unknown[1358]: wrote ssh authorized keys file for user: core Apr 17 23:40:02.323443 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:40:02.323443 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:40:02.413952 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:40:02.569894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:40:02.569894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:40:02.572793 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 17 23:40:03.447528 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 17 23:40:05.166872 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 17 23:40:05.166872 ignition[1358]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 17 23:40:05.169237 ignition[1358]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:40:05.170640 ignition[1358]: INFO : files: files passed Apr 17 23:40:05.170640 ignition[1358]: INFO : Ignition finished successfully Apr 17 23:40:05.172319 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:40:05.180568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:40:05.183870 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:40:05.187814 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:40:05.187939 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:40:05.202913 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:40:05.202913 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:40:05.206630 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:40:05.208877 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:40:05.209692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:40:05.215576 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:40:05.245387 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:40:05.245538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:40:05.246242 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:40:05.246880 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:40:05.247794 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:40:05.254535 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:40:05.268501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:40:05.274590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:40:05.286919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:40:05.287709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:40:05.289000 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:40:05.290087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:40:05.290272 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:40:05.291496 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:40:05.292399 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:40:05.293194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:40:05.294126 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:40:05.294901 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:40:05.295678 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:40:05.296445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:40:05.297217 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:40:05.298472 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:40:05.299206 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:40:05.299929 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:40:05.300115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:40:05.301196 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:40:05.302135 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:40:05.302823 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:40:05.302970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:40:05.303633 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:40:05.303809 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:40:05.305172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:40:05.305453 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:40:05.306187 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:40:05.306353 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:40:05.315714 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:40:05.316428 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:40:05.316714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:40:05.320669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:40:05.321995 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:40:05.322267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:40:05.323795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:40:05.324486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:40:05.339049 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:40:05.339186 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:40:05.343223 ignition[1411]: INFO : Ignition 2.19.0
Apr 17 23:40:05.343223 ignition[1411]: INFO : Stage: umount
Apr 17 23:40:05.346153 ignition[1411]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:40:05.346153 ignition[1411]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:40:05.346153 ignition[1411]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:40:05.350081 ignition[1411]: INFO : PUT result: OK
Apr 17 23:40:05.358179 ignition[1411]: INFO : umount: umount passed
Apr 17 23:40:05.360126 ignition[1411]: INFO : Ignition finished successfully
Apr 17 23:40:05.364081 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:40:05.365202 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:40:05.365933 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:40:05.367922 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:40:05.368044 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:40:05.370124 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:40:05.370177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:40:05.370564 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:40:05.370604 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:40:05.370925 systemd[1]: Stopped target network.target - Network.
Apr 17 23:40:05.371201 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:40:05.371243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:40:05.371656 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:40:05.372219 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:40:05.375433 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:40:05.375795 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:40:05.376702 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:40:05.377463 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:40:05.377532 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:40:05.378031 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:40:05.378085 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:40:05.378629 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:40:05.378702 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:40:05.379245 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:40:05.379309 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:40:05.380066 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:40:05.380710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:40:05.381877 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:40:05.382002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:40:05.383692 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:40:05.383779 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:40:05.384614 systemd-networkd[1166]: eth0: DHCPv6 lease lost
Apr 17 23:40:05.386119 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:40:05.386238 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:40:05.387499 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:40:05.387576 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:40:05.394520 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:40:05.395894 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:40:05.395974 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:40:05.398025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:40:05.399317 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:40:05.401705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:40:05.408546 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:40:05.408661 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:40:05.414504 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:40:05.414578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:40:05.415356 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:40:05.415427 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:40:05.418708 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:40:05.418900 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:40:05.420425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:40:05.420508 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:40:05.422304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:40:05.422372 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:40:05.423132 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:40:05.423200 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:40:05.424864 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:40:05.424930 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:40:05.427361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:40:05.427435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:40:05.438561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:40:05.439029 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:40:05.439101 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:40:05.439617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:40:05.439661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:40:05.440400 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:40:05.440489 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:40:05.448611 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:40:05.449184 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:40:05.450284 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:40:05.455565 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:40:05.466471 systemd[1]: Switching root.
Apr 17 23:40:05.493929 systemd-journald[179]: Journal stopped
Apr 17 23:40:07.400912 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:40:07.401004 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:40:07.401028 kernel: SELinux: policy capability open_perms=1
Apr 17 23:40:07.401049 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:40:07.401075 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:40:07.401100 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:40:07.401120 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:40:07.401151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:40:07.401180 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:40:07.401200 kernel: audit: type=1403 audit(1776469206.013:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:40:07.401227 systemd[1]: Successfully loaded SELinux policy in 57.569ms.
Apr 17 23:40:07.401341 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.974ms.
Apr 17 23:40:07.401368 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:40:07.401390 systemd[1]: Detected virtualization amazon.
Apr 17 23:40:07.401411 systemd[1]: Detected architecture x86-64.
Apr 17 23:40:07.401438 systemd[1]: Detected first boot.
Apr 17 23:40:07.401456 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:40:07.401474 zram_generator::config[1453]: No configuration found.
Apr 17 23:40:07.401500 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:40:07.401520 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:40:07.401539 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:40:07.401559 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:40:07.401579 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:40:07.401599 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:40:07.401622 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:40:07.401642 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:40:07.401661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:40:07.401681 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:40:07.401701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:40:07.401720 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:40:07.401739 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:40:07.401759 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:40:07.401778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:40:07.401800 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:40:07.401819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:40:07.401838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:40:07.401857 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:40:07.401877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:40:07.401896 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:40:07.401915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:40:07.401935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:40:07.401958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:40:07.401978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:40:07.401998 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:40:07.402017 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:40:07.402036 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:40:07.402055 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:40:07.402075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:40:07.402094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:40:07.402116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:40:07.402137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:40:07.402157 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:40:07.402176 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:40:07.402195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:40:07.402215 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:40:07.402234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:07.402253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:40:07.402275 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:40:07.402297 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:40:07.402316 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:40:07.402355 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:40:07.402375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:40:07.402395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:40:07.402416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:40:07.402437 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:40:07.402457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:40:07.402480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:40:07.402499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:40:07.402517 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:40:07.402537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:40:07.402555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:40:07.402574 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:40:07.402593 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:40:07.402611 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:40:07.402629 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:40:07.402651 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:40:07.402673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:40:07.402696 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:40:07.402722 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:40:07.402746 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:40:07.402770 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:40:07.402794 systemd[1]: Stopped verity-setup.service.
Apr 17 23:40:07.402815 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:07.402839 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:40:07.402868 kernel: ACPI: bus type drm_connector registered
Apr 17 23:40:07.402893 kernel: loop: module loaded
Apr 17 23:40:07.402919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:40:07.402945 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:40:07.402970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:40:07.402996 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:40:07.403028 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:40:07.403054 kernel: fuse: init (API version 7.39)
Apr 17 23:40:07.403080 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:40:07.403106 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:40:07.403131 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:40:07.403156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:40:07.403180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:40:07.403211 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:40:07.403237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:40:07.403264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:40:07.403293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:40:07.403319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:40:07.403450 systemd-journald[1542]: Collecting audit messages is disabled.
Apr 17 23:40:07.403504 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:40:07.403530 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:40:07.403554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:40:07.403580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:40:07.403606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:40:07.403633 systemd-journald[1542]: Journal started
Apr 17 23:40:07.403690 systemd-journald[1542]: Runtime Journal (/run/log/journal/ec26eb21c1e6027d498c10261c570b51) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:40:06.923894 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:40:06.954747 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:40:06.955158 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:40:07.407358 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:40:07.409180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:40:07.410585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:40:07.428582 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:40:07.439069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:40:07.446431 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:40:07.449454 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:40:07.449517 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:40:07.452174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:40:07.463082 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:40:07.470448 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:40:07.471714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:40:07.480586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:40:07.484648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:40:07.486542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:40:07.492513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:40:07.493323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:40:07.495545 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:40:07.505807 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:40:07.511315 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:40:07.516623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:40:07.517458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:40:07.518541 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:40:07.519690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:40:07.532511 systemd-journald[1542]: Time spent on flushing to /var/log/journal/ec26eb21c1e6027d498c10261c570b51 is 67.644ms for 983 entries.
Apr 17 23:40:07.532511 systemd-journald[1542]: System Journal (/var/log/journal/ec26eb21c1e6027d498c10261c570b51) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:40:07.617641 systemd-journald[1542]: Received client request to flush runtime journal.
Apr 17 23:40:07.617708 kernel: loop0: detected capacity change from 0 to 61336
Apr 17 23:40:07.540573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:40:07.554923 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:40:07.558003 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:40:07.566726 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:40:07.585411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:40:07.610449 udevadm[1589]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:40:07.620825 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:40:07.635791 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:40:07.643543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:40:07.659459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:40:07.663287 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:40:07.707929 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:40:07.740775 kernel: loop1: detected capacity change from 0 to 217752
Apr 17 23:40:07.741575 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Apr 17 23:40:07.741601 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Apr 17 23:40:07.754731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:40:07.862300 kernel: loop2: detected capacity change from 0 to 140768
Apr 17 23:40:07.948582 kernel: loop3: detected capacity change from 0 to 142488
Apr 17 23:40:08.052362 kernel: loop4: detected capacity change from 0 to 61336
Apr 17 23:40:08.076372 kernel: loop5: detected capacity change from 0 to 217752
Apr 17 23:40:08.119678 kernel: loop6: detected capacity change from 0 to 140768
Apr 17 23:40:08.148358 kernel: loop7: detected capacity change from 0 to 142488
Apr 17 23:40:08.177815 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:40:08.180084 (sd-merge)[1609]: Merged extensions into '/usr'.
Apr 17 23:40:08.189759 systemd[1]: Reloading requested from client PID 1582 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:40:08.189777 systemd[1]: Reloading...
Apr 17 23:40:08.282363 zram_generator::config[1631]: No configuration found.
Apr 17 23:40:08.511525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:40:08.591915 systemd[1]: Reloading finished in 401 ms.
Apr 17 23:40:08.619529 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:40:08.620598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:40:08.631602 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:40:08.633870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:40:08.637565 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:40:08.664082 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:40:08.664101 systemd[1]: Reloading...
Apr 17 23:40:08.684406 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:40:08.684947 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:40:08.686319 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:40:08.686867 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Apr 17 23:40:08.686971 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Apr 17 23:40:08.693222 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:40:08.693261 systemd-tmpfiles[1688]: Skipping /boot
Apr 17 23:40:08.722963 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:40:08.722984 systemd-tmpfiles[1688]: Skipping /boot
Apr 17 23:40:08.745862 systemd-udevd[1689]: Using default interface naming scheme 'v255'.
Apr 17 23:40:08.789368 zram_generator::config[1715]: No configuration found.
Apr 17 23:40:08.957531 (udev-worker)[1749]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:40:09.083357 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 23:40:09.088357 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 17 23:40:09.099680 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:40:09.099775 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 17 23:40:09.102646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:40:09.116414 kernel: ACPI: button: Sleep Button [SLPF]
Apr 17 23:40:09.173497 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Apr 17 23:40:09.188360 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1758)
Apr 17 23:40:09.288502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:40:09.288746 systemd[1]: Reloading finished in 624 ms.
Apr 17 23:40:09.317502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:40:09.320665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:40:09.360416 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:40:09.407569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:09.413684 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:40:09.424647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:40:09.425564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:40:09.429676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:40:09.439322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:40:09.447005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:40:09.452058 ldconfig[1577]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:40:09.449312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:40:09.456755 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:40:09.473197 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:40:09.479803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:40:09.483986 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:40:09.495565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:40:09.496627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:09.501672 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:40:09.503166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:40:09.503311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:40:09.507084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:40:09.507471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:40:09.510028 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:40:09.524963 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:40:09.525475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:40:09.577991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:40:09.582592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:40:09.590828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:09.591305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:40:09.593668 augenrules[1910]: No rules
Apr 17 23:40:09.599665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:40:09.603303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:40:09.610672 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:40:09.614349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:40:09.624703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:40:09.625448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:40:09.635092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:40:09.636518 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:40:09.643689 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:40:09.644319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:40:09.648156 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:40:09.654013 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:40:09.655752 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:40:09.658121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:40:09.658367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:40:09.660027 lvm[1916]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:40:09.660289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:40:09.661394 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:40:09.662760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:40:09.662948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:40:09.664156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:40:09.664324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:40:09.676401 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:40:09.683256 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:40:09.694708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:40:09.694842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:40:09.700511 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:40:09.701110 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:40:09.701661 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:40:09.704310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:40:09.711911 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:40:09.735920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:40:09.737069 lvm[1937]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:40:09.739918 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:40:09.763495 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:40:09.769528 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:40:09.846604 systemd-networkd[1885]: lo: Link UP
Apr 17 23:40:09.846616 systemd-networkd[1885]: lo: Gained carrier
Apr 17 23:40:09.848464 systemd-networkd[1885]: Enumeration completed
Apr 17 23:40:09.848614 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:40:09.850800 systemd-networkd[1885]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:40:09.850813 systemd-networkd[1885]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:40:09.853762 systemd-networkd[1885]: eth0: Link UP
Apr 17 23:40:09.856485 systemd-networkd[1885]: eth0: Gained carrier
Apr 17 23:40:09.856517 systemd-networkd[1885]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:40:09.856574 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:40:09.860745 systemd-resolved[1892]: Positive Trust Anchors:
Apr 17 23:40:09.861175 systemd-resolved[1892]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:40:09.861370 systemd-resolved[1892]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:40:09.867284 systemd-resolved[1892]: Defaulting to hostname 'linux'.
Apr 17 23:40:09.870445 systemd-networkd[1885]: eth0: DHCPv4 address 172.31.24.87/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:40:09.871081 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:40:09.872582 systemd[1]: Reached target network.target - Network.
Apr 17 23:40:09.873192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:40:09.874486 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:40:09.875357 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:40:09.876083 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:40:09.877021 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:40:09.877891 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:40:09.878531 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:40:09.879105 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:40:09.879145 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:40:09.879862 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:40:09.880720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:40:09.882781 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:40:09.889633 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:40:09.890865 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:40:09.891533 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:40:09.891966 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:40:09.892428 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:40:09.892530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:40:09.893810 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:40:09.896546 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:40:09.900955 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:40:09.904460 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:40:09.907535 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:40:09.908280 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:40:09.911532 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:40:09.914863 systemd[1]: Started ntpd.service - Network Time Service.
Apr 17 23:40:09.919476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:40:09.928557 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 17 23:40:09.934498 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:40:09.937707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:40:09.952586 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:40:09.953748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:40:09.954900 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:40:09.956862 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:40:09.969204 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:40:10.035277 jq[1954]: false
Apr 17 23:40:10.032180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:40:10.032459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:40:10.039526 extend-filesystems[1955]: Found loop4
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found loop5
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found loop6
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found loop7
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found nvme0n1
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found nvme0n1p1
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found nvme0n1p2
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found nvme0n1p3
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found usr
Apr 17 23:40:10.044993 extend-filesystems[1955]: Found nvme0n1p4
Apr 17 23:40:10.056253 extend-filesystems[1955]: Found nvme0n1p6
Apr 17 23:40:10.056253 extend-filesystems[1955]: Found nvme0n1p7
Apr 17 23:40:10.056253 extend-filesystems[1955]: Found nvme0n1p9
Apr 17 23:40:10.056253 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9
Apr 17 23:40:10.063944 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: ----------------------------------------------------
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: corporation. Support and training for ntp-4 are
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: available at https://www.nwtime.org/support
Apr 17 23:40:10.067591 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: ----------------------------------------------------
Apr 17 23:40:10.063974 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 17 23:40:10.063985 ntpd[1957]: ----------------------------------------------------
Apr 17 23:40:10.063997 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Apr 17 23:40:10.064008 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 17 23:40:10.064018 ntpd[1957]: corporation. Support and training for ntp-4 are
Apr 17 23:40:10.064029 ntpd[1957]: available at https://www.nwtime.org/support
Apr 17 23:40:10.064040 ntpd[1957]: ----------------------------------------------------
Apr 17 23:40:10.076034 jq[1966]: true
Apr 17 23:40:10.076508 ntpd[1957]: proto: precision = 0.077 usec (-24)
Apr 17 23:40:10.077901 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: proto: precision = 0.077 usec (-24)
Apr 17 23:40:10.079577 ntpd[1957]: basedate set to 2026-04-05
Apr 17 23:40:10.079606 ntpd[1957]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:40:10.079745 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: basedate set to 2026-04-05
Apr 17 23:40:10.079745 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: gps base set to 2026-04-05 (week 2413)
Apr 17 23:40:10.092899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:40:10.094418 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:40:10.096488 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9
Apr 17 23:40:10.096691 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:40:10.096920 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:40:10.106248 extend-filesystems[1992]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:40:10.109304 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:40:10.110382 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Apr 17 23:40:10.110382 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:40:10.110423 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 17 23:40:10.112981 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 17 23:40:10.118013 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listen normally on 3 eth0 172.31.24.87:123
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listen normally on 4 lo [::1]:123
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: bind(21) AF_INET6 fe80::44c:c8ff:feee:4857%2#123 flags 0x11 failed: Cannot assign requested address
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: unable to create socket on eth0 (5) for fe80::44c:c8ff:feee:4857%2#123
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: failed to init interface for address fe80::44c:c8ff:feee:4857%2
Apr 17 23:40:10.119100 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Apr 17 23:40:10.118071 ntpd[1957]: Listen normally on 3 eth0 172.31.24.87:123
Apr 17 23:40:10.118114 ntpd[1957]: Listen normally on 4 lo [::1]:123
Apr 17 23:40:10.118171 ntpd[1957]: bind(21) AF_INET6 fe80::44c:c8ff:feee:4857%2#123 flags 0x11 failed: Cannot assign requested address
Apr 17 23:40:10.118194 ntpd[1957]: unable to create socket on eth0 (5) for fe80::44c:c8ff:feee:4857%2#123
Apr 17 23:40:10.118209 ntpd[1957]: failed to init interface for address fe80::44c:c8ff:feee:4857%2
Apr 17 23:40:10.118256 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Apr 17 23:40:10.138018 dbus-daemon[1953]: [system] SELinux support is enabled
Apr 17 23:40:10.141401 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:40:10.146690 update_engine[1964]: I20260417 23:40:10.145243 1964 main.cc:92] Flatcar Update Engine starting
Apr 17 23:40:10.146966 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:40:10.146966 ntpd[1957]: 17 Apr 23:40:10 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:40:10.144232 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:40:10.141434 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 17 23:40:10.144974 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1885 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 17 23:40:10.155376 update_engine[1964]: I20260417 23:40:10.148952 1964 update_check_scheduler.cc:74] Next update check in 6m2s
Apr 17 23:40:10.162409 (ntainerd)[1996]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:40:10.163406 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:40:10.163451 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:40:10.164439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:40:10.164470 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:40:10.172121 tar[1973]: linux-amd64/LICENSE
Apr 17 23:40:10.172121 tar[1973]: linux-amd64/helm
Apr 17 23:40:10.176314 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:40:10.184093 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:40:10.192626 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:40:10.199133 jq[1993]: true
Apr 17 23:40:10.194970 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 17 23:40:10.224542 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 17 23:40:10.294855 coreos-metadata[1952]: Apr 17 23:40:10.294 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 23:40:10.302352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1744)
Apr 17 23:40:10.318753 coreos-metadata[1952]: Apr 17 23:40:10.318 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 17 23:40:10.322682 coreos-metadata[1952]: Apr 17 23:40:10.322 INFO Fetch successful
Apr 17 23:40:10.322682 coreos-metadata[1952]: Apr 17 23:40:10.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 17 23:40:10.325354 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 17 23:40:10.329708 coreos-metadata[1952]: Apr 17 23:40:10.329 INFO Fetch successful
Apr 17 23:40:10.329708 coreos-metadata[1952]: Apr 17 23:40:10.329 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 17 23:40:10.337808 coreos-metadata[1952]: Apr 17 23:40:10.337 INFO Fetch successful
Apr 17 23:40:10.337808 coreos-metadata[1952]: Apr 17 23:40:10.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 17 23:40:10.340642 systemd-logind[1962]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:40:10.340675 systemd-logind[1962]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 17 23:40:10.340700 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:40:10.345794 coreos-metadata[1952]: Apr 17 23:40:10.341 INFO Fetch successful
Apr 17 23:40:10.345794 coreos-metadata[1952]: Apr 17 23:40:10.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 17 23:40:10.341931 systemd-logind[1962]: New seat seat0.
Apr 17 23:40:10.345982 extend-filesystems[1992]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 17 23:40:10.345982 extend-filesystems[1992]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 17 23:40:10.345982 extend-filesystems[1992]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 17 23:40:10.345214 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:40:10.358585 coreos-metadata[1952]: Apr 17 23:40:10.349 INFO Fetch failed with 404: resource not found
Apr 17 23:40:10.358585 coreos-metadata[1952]: Apr 17 23:40:10.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 17 23:40:10.358585 coreos-metadata[1952]: Apr 17 23:40:10.354 INFO Fetch successful
Apr 17 23:40:10.358585 coreos-metadata[1952]: Apr 17 23:40:10.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 17 23:40:10.358761 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9
Apr 17 23:40:10.348083 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:40:10.368434 coreos-metadata[1952]: Apr 17 23:40:10.359 INFO Fetch successful
Apr 17 23:40:10.368434 coreos-metadata[1952]: Apr 17 23:40:10.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 17 23:40:10.348364 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:40:10.370646 coreos-metadata[1952]: Apr 17 23:40:10.370 INFO Fetch successful
Apr 17 23:40:10.370646 coreos-metadata[1952]: Apr 17 23:40:10.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 17 23:40:10.371819 coreos-metadata[1952]: Apr 17 23:40:10.371 INFO Fetch successful
Apr 17 23:40:10.371819 coreos-metadata[1952]: Apr 17 23:40:10.371 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 17 23:40:10.380709 coreos-metadata[1952]: Apr 17 23:40:10.380 INFO Fetch successful
Apr 17 23:40:10.431073 bash[2038]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:40:10.434777 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:40:10.448796 systemd[1]: Starting sshkeys.service...
Apr 17 23:40:10.497239 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:40:10.506570 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:40:10.572856 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:40:10.576052 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:40:10.615156 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 17 23:40:10.615375 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 17 23:40:10.617561 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2014 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 17 23:40:10.626863 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 17 23:40:10.695256 polkitd[2087]: Started polkitd version 121
Apr 17 23:40:10.738669 locksmithd[2003]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:40:10.745496 coreos-metadata[2057]: Apr 17 23:40:10.745 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 17 23:40:10.750303 coreos-metadata[2057]: Apr 17 23:40:10.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 17 23:40:10.751019 polkitd[2087]: Loading rules from directory /etc/polkit-1/rules.d
Apr 17 23:40:10.751101 polkitd[2087]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 17 23:40:10.753132 coreos-metadata[2057]: Apr 17 23:40:10.752 INFO Fetch successful
Apr 17 23:40:10.753132 coreos-metadata[2057]: Apr 17 23:40:10.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 17 23:40:10.754385 polkitd[2087]: Finished loading, compiling and executing 2 rules
Apr 17 23:40:10.757026 coreos-metadata[2057]: Apr 17 23:40:10.756 INFO Fetch successful
Apr 17 23:40:10.758708 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 17 23:40:10.758917 systemd[1]: Started polkit.service - Authorization Manager.
Apr 17 23:40:10.759811 unknown[2057]: wrote ssh authorized keys file for user: core
Apr 17 23:40:10.762734 polkitd[2087]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 17 23:40:10.803800 update-ssh-keys[2128]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:40:10.806876 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 23:40:10.814692 systemd[1]: Finished sshkeys.service.
Apr 17 23:40:10.820813 systemd-hostnamed[2014]: Hostname set to (transient)
Apr 17 23:40:10.821190 sshd_keygen[2002]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:40:10.820819 systemd-resolved[1892]: System hostname changed to 'ip-172-31-24-87'.
Apr 17 23:40:10.960395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:40:10.968675 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:40:10.993862 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:40:10.994665 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:40:11.000811 containerd[1996]: time="2026-04-17T23:40:11.000706645Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:40:11.007801 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:40:11.046992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:40:11.055822 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:40:11.064482 ntpd[1957]: bind(24) AF_INET6 fe80::44c:c8ff:feee:4857%2#123 flags 0x11 failed: Cannot assign requested address
Apr 17 23:40:11.065679 ntpd[1957]: unable to create socket on eth0 (6) for fe80::44c:c8ff:feee:4857%2#123
Apr 17 23:40:11.067783 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:40:11.072690 ntpd[1957]: 17 Apr 23:40:11 ntpd[1957]: bind(24) AF_INET6 fe80::44c:c8ff:feee:4857%2#123 flags 0x11 failed: Cannot assign requested address
Apr 17 23:40:11.072690 ntpd[1957]: 17 Apr 23:40:11 ntpd[1957]: unable to create socket on eth0 (6) for fe80::44c:c8ff:feee:4857%2#123
Apr 17 23:40:11.072690 ntpd[1957]: 17 Apr 23:40:11 ntpd[1957]: failed to init interface for address fe80::44c:c8ff:feee:4857%2
Apr 17 23:40:11.065697 ntpd[1957]: failed to init interface for address fe80::44c:c8ff:feee:4857%2
Apr 17 23:40:11.068856 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:40:11.080017 containerd[1996]: time="2026-04-17T23:40:11.079932324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.082217 containerd[1996]: time="2026-04-17T23:40:11.082168416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:40:11.082377 containerd[1996]: time="2026-04-17T23:40:11.082353146Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:40:11.082493 containerd[1996]: time="2026-04-17T23:40:11.082475843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.082729743Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.082758898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.082834636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.082853444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.083073249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.083095752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.083120478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.083142523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083353 containerd[1996]: time="2026-04-17T23:40:11.083243340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.083977 containerd[1996]: time="2026-04-17T23:40:11.083954380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:40:11.084225 containerd[1996]: time="2026-04-17T23:40:11.084200804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:40:11.084314 containerd[1996]: time="2026-04-17T23:40:11.084297450Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:40:11.084535 containerd[1996]: time="2026-04-17T23:40:11.084514779Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:40:11.084748 containerd[1996]: time="2026-04-17T23:40:11.084650398Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:40:11.091938 containerd[1996]: time="2026-04-17T23:40:11.091892142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:40:11.092056 containerd[1996]: time="2026-04-17T23:40:11.091962620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:40:11.092056 containerd[1996]: time="2026-04-17T23:40:11.091985138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:40:11.092056 containerd[1996]: time="2026-04-17T23:40:11.092004601Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:40:11.092056 containerd[1996]: time="2026-04-17T23:40:11.092023207Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:40:11.092243 containerd[1996]: time="2026-04-17T23:40:11.092215619Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:40:11.092537 containerd[1996]: time="2026-04-17T23:40:11.092507763Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:40:11.092681 containerd[1996]: time="2026-04-17T23:40:11.092652845Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:40:11.092733 containerd[1996]: time="2026-04-17T23:40:11.092681009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:40:11.092733 containerd[1996]: time="2026-04-17T23:40:11.092702055Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:40:11.092733 containerd[1996]: time="2026-04-17T23:40:11.092723772Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092838 containerd[1996]: time="2026-04-17T23:40:11.092746285Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092838 containerd[1996]: time="2026-04-17T23:40:11.092772545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092838 containerd[1996]: time="2026-04-17T23:40:11.092796181Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092838 containerd[1996]: time="2026-04-17T23:40:11.092820325Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092843272Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092862675Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092882242Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092915310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092937226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"...
type=io.containerd.grpc.v1 Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092956938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.092987 containerd[1996]: time="2026-04-17T23:40:11.092978105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.092998115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093019586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093037956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093059650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093079820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093111170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093131519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093150539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093173031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093197679Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:40:11.093251 containerd[1996]: time="2026-04-17T23:40:11.093243786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093658 containerd[1996]: time="2026-04-17T23:40:11.093271083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.093658 containerd[1996]: time="2026-04-17T23:40:11.093289647Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095388607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095434994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095453992Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095473793Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095488012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095509063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095525350Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:40:11.097024 containerd[1996]: time="2026-04-17T23:40:11.095540968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:40:11.097463 containerd[1996]: time="2026-04-17T23:40:11.096317878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:40:11.097463 containerd[1996]: time="2026-04-17T23:40:11.096445284Z" level=info msg="Connect containerd service" Apr 17 23:40:11.097463 containerd[1996]: time="2026-04-17T23:40:11.096512418Z" level=info msg="using legacy CRI server" Apr 17 23:40:11.097463 containerd[1996]: time="2026-04-17T23:40:11.096530211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:40:11.097463 containerd[1996]: time="2026-04-17T23:40:11.096686908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:40:11.097814 containerd[1996]: time="2026-04-17T23:40:11.097596705Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:40:11.098504 containerd[1996]: time="2026-04-17T23:40:11.098451091Z" level=info msg="Start subscribing containerd event" Apr 17 
23:40:11.103559 containerd[1996]: time="2026-04-17T23:40:11.103525017Z" level=info msg="Start recovering state" Apr 17 23:40:11.104382 containerd[1996]: time="2026-04-17T23:40:11.104002777Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:40:11.104458 containerd[1996]: time="2026-04-17T23:40:11.104441105Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:40:11.104536 containerd[1996]: time="2026-04-17T23:40:11.104520331Z" level=info msg="Start event monitor" Apr 17 23:40:11.104606 containerd[1996]: time="2026-04-17T23:40:11.104593659Z" level=info msg="Start snapshots syncer" Apr 17 23:40:11.104671 containerd[1996]: time="2026-04-17T23:40:11.104659115Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:40:11.104745 containerd[1996]: time="2026-04-17T23:40:11.104732514Z" level=info msg="Start streaming server" Apr 17 23:40:11.104982 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:40:11.105761 containerd[1996]: time="2026-04-17T23:40:11.105737350Z" level=info msg="containerd successfully booted in 0.109904s" Apr 17 23:40:11.261476 systemd-networkd[1885]: eth0: Gained IPv6LL Apr 17 23:40:11.265092 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:40:11.267541 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:40:11.279168 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:40:11.287460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:40:11.289813 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:40:11.359653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:40:11.385604 amazon-ssm-agent[2175]: Initializing new seelog logger Apr 17 23:40:11.385980 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete Apr 17 23:40:11.385980 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.385980 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.386348 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 processing appconfig overrides Apr 17 23:40:11.386860 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.386860 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.386978 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 processing appconfig overrides Apr 17 23:40:11.387294 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.387294 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.387406 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 processing appconfig overrides Apr 17 23:40:11.387912 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO Proxy environment variables: Apr 17 23:40:11.390253 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.390253 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:40:11.390402 amazon-ssm-agent[2175]: 2026/04/17 23:40:11 processing appconfig overrides Apr 17 23:40:11.431577 tar[1973]: linux-amd64/README.md Apr 17 23:40:11.450303 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 17 23:40:11.488008 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO https_proxy: Apr 17 23:40:11.586319 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO http_proxy: Apr 17 23:40:11.684041 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO no_proxy: Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO Agent will take identity from EC2 Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [Registrar] Starting registrar module Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [EC2Identity] EC2 registration was successful. 
Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:40:11.732383 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:40:11.782561 amazon-ssm-agent[2175]: 2026-04-17 23:40:11 INFO [CredentialRefresher] Next credential rotation will be in 31.074992913933333 minutes Apr 17 23:40:12.352382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:40:12.361558 systemd[1]: Started sshd@0-172.31.24.87:22-20.229.252.112:52826.service - OpenSSH per-connection server daemon (20.229.252.112:52826). Apr 17 23:40:12.751816 amazon-ssm-agent[2175]: 2026-04-17 23:40:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:40:12.852485 amazon-ssm-agent[2175]: 2026-04-17 23:40:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2200) started Apr 17 23:40:12.952717 amazon-ssm-agent[2175]: 2026-04-17 23:40:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:40:13.364187 sshd[2197]: Accepted publickey for core from 20.229.252.112 port 52826 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:13.368079 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:13.377911 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:40:13.386749 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:40:13.392616 systemd-logind[1962]: New session 1 of user core. 
Apr 17 23:40:13.403959 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:40:13.413746 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:40:13.428232 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:40:13.449576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:40:13.449843 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:40:13.452146 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:40:13.564863 systemd[2213]: Queued start job for default target default.target. Apr 17 23:40:13.577860 systemd[2213]: Created slice app.slice - User Application Slice. Apr 17 23:40:13.577906 systemd[2213]: Reached target paths.target - Paths. Apr 17 23:40:13.577928 systemd[2213]: Reached target timers.target - Timers. Apr 17 23:40:13.579599 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:40:13.600361 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:40:13.600544 systemd[2213]: Reached target sockets.target - Sockets. Apr 17 23:40:13.600576 systemd[2213]: Reached target basic.target - Basic System. Apr 17 23:40:13.600637 systemd[2213]: Reached target default.target - Main User Target. Apr 17 23:40:13.600678 systemd[2213]: Startup finished in 161ms. Apr 17 23:40:13.600828 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:40:13.607546 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:40:13.608803 systemd[1]: Startup finished in 621ms (kernel) + 9.299s (initrd) + 7.652s (userspace) = 17.573s. 
Apr 17 23:40:14.064476 ntpd[1957]: Listen normally on 7 eth0 [fe80::44c:c8ff:feee:4857%2]:123 Apr 17 23:40:14.064948 ntpd[1957]: 17 Apr 23:40:14 ntpd[1957]: Listen normally on 7 eth0 [fe80::44c:c8ff:feee:4857%2]:123 Apr 17 23:40:14.325406 systemd[1]: Started sshd@1-172.31.24.87:22-20.229.252.112:52916.service - OpenSSH per-connection server daemon (20.229.252.112:52916). Apr 17 23:40:14.510573 kubelet[2219]: E0417 23:40:14.510514 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:40:14.513149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:40:14.513540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:40:14.514130 systemd[1]: kubelet.service: Consumed 1.029s CPU time. Apr 17 23:40:15.331838 sshd[2239]: Accepted publickey for core from 20.229.252.112 port 52916 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:15.332664 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:15.337408 systemd-logind[1962]: New session 2 of user core. Apr 17 23:40:15.347591 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:40:16.030942 sshd[2239]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:16.034393 systemd[1]: sshd@1-172.31.24.87:22-20.229.252.112:52916.service: Deactivated successfully. Apr 17 23:40:16.036429 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:40:16.037928 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:40:16.039159 systemd-logind[1962]: Removed session 2. 
Apr 17 23:40:16.193598 systemd[1]: Started sshd@2-172.31.24.87:22-20.229.252.112:52924.service - OpenSSH per-connection server daemon (20.229.252.112:52924). Apr 17 23:40:18.557871 systemd-resolved[1892]: Clock change detected. Flushing caches. Apr 17 23:40:18.664784 sshd[2247]: Accepted publickey for core from 20.229.252.112 port 52924 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:18.666303 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:18.670895 systemd-logind[1962]: New session 3 of user core. Apr 17 23:40:18.677834 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:40:19.338107 sshd[2247]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:19.342379 systemd[1]: sshd@2-172.31.24.87:22-20.229.252.112:52924.service: Deactivated successfully. Apr 17 23:40:19.344516 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:40:19.345308 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:40:19.346356 systemd-logind[1962]: Removed session 3. Apr 17 23:40:19.516044 systemd[1]: Started sshd@3-172.31.24.87:22-20.229.252.112:52934.service - OpenSSH per-connection server daemon (20.229.252.112:52934). Apr 17 23:40:20.489402 sshd[2254]: Accepted publickey for core from 20.229.252.112 port 52934 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:20.490108 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:20.495517 systemd-logind[1962]: New session 4 of user core. Apr 17 23:40:20.501914 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:40:21.170685 sshd[2254]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:21.175335 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:40:21.175634 systemd[1]: sshd@3-172.31.24.87:22-20.229.252.112:52934.service: Deactivated successfully. 
Apr 17 23:40:21.177713 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:40:21.178845 systemd-logind[1962]: Removed session 4. Apr 17 23:40:21.345981 systemd[1]: Started sshd@4-172.31.24.87:22-20.229.252.112:52942.service - OpenSSH per-connection server daemon (20.229.252.112:52942). Apr 17 23:40:22.316090 sshd[2261]: Accepted publickey for core from 20.229.252.112 port 52942 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:22.317571 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:22.322840 systemd-logind[1962]: New session 5 of user core. Apr 17 23:40:22.329825 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:40:22.869987 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:40:22.870458 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:40:22.891377 sudo[2264]: pam_unix(sudo:session): session closed for user root Apr 17 23:40:23.050498 sshd[2261]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:23.054299 systemd[1]: sshd@4-172.31.24.87:22-20.229.252.112:52942.service: Deactivated successfully. Apr 17 23:40:23.056548 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:40:23.057958 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:40:23.059524 systemd-logind[1962]: Removed session 5. Apr 17 23:40:23.221972 systemd[1]: Started sshd@5-172.31.24.87:22-20.229.252.112:52946.service - OpenSSH per-connection server daemon (20.229.252.112:52946). Apr 17 23:40:24.192621 sshd[2269]: Accepted publickey for core from 20.229.252.112 port 52946 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:24.194266 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:24.199708 systemd-logind[1962]: New session 6 of user core. 
Apr 17 23:40:24.208841 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:40:24.712704 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:40:24.713097 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:40:24.717114 sudo[2273]: pam_unix(sudo:session): session closed for user root Apr 17 23:40:24.722728 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:40:24.723030 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:40:24.738992 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:40:24.742621 auditctl[2276]: No rules Apr 17 23:40:24.741179 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:40:24.741360 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:40:24.744725 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:40:24.776158 augenrules[2294]: No rules Apr 17 23:40:24.777727 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:40:24.779027 sudo[2272]: pam_unix(sudo:session): session closed for user root Apr 17 23:40:24.938179 sshd[2269]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:24.941673 systemd[1]: sshd@5-172.31.24.87:22-20.229.252.112:52946.service: Deactivated successfully. Apr 17 23:40:24.943829 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:40:24.945575 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:40:24.947030 systemd-logind[1962]: Removed session 6. Apr 17 23:40:25.124981 systemd[1]: Started sshd@6-172.31.24.87:22-20.229.252.112:51700.service - OpenSSH per-connection server daemon (20.229.252.112:51700). 
Apr 17 23:40:26.129090 sshd[2302]: Accepted publickey for core from 20.229.252.112 port 51700 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:40:26.129782 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:26.130905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:40:26.136890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:40:26.140858 systemd-logind[1962]: New session 7 of user core. Apr 17 23:40:26.148150 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:40:26.373957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:40:26.385199 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:40:26.432926 kubelet[2313]: E0417 23:40:26.432870 2313 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:40:26.436840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:40:26.437050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:40:26.664478 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:40:26.664894 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:40:27.188998 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 17 23:40:27.190155 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:40:27.764690 dockerd[2335]: time="2026-04-17T23:40:27.764628617Z" level=info msg="Starting up" Apr 17 23:40:27.991379 dockerd[2335]: time="2026-04-17T23:40:27.991329155Z" level=info msg="Loading containers: start." Apr 17 23:40:28.115631 kernel: Initializing XFRM netlink socket Apr 17 23:40:28.146203 (udev-worker)[2356]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:40:28.212895 systemd-networkd[1885]: docker0: Link UP Apr 17 23:40:28.236445 dockerd[2335]: time="2026-04-17T23:40:28.236397443Z" level=info msg="Loading containers: done." Apr 17 23:40:28.252045 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1332187016-merged.mount: Deactivated successfully. Apr 17 23:40:28.255239 dockerd[2335]: time="2026-04-17T23:40:28.255180843Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:40:28.255372 dockerd[2335]: time="2026-04-17T23:40:28.255308565Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:40:28.255477 dockerd[2335]: time="2026-04-17T23:40:28.255450194Z" level=info msg="Daemon has completed initialization" Apr 17 23:40:28.295225 dockerd[2335]: time="2026-04-17T23:40:28.295064856Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:40:28.295563 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 17 23:40:29.819095 containerd[1996]: time="2026-04-17T23:40:29.819055629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 17 23:40:30.409814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114532495.mount: Deactivated successfully.
Apr 17 23:40:32.155944 containerd[1996]: time="2026-04-17T23:40:32.155889175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:32.157383 containerd[1996]: time="2026-04-17T23:40:32.157322087Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579423"
Apr 17 23:40:32.159159 containerd[1996]: time="2026-04-17T23:40:32.158710717Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:32.162036 containerd[1996]: time="2026-04-17T23:40:32.161998632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:32.163483 containerd[1996]: time="2026-04-17T23:40:32.163444036Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.344345429s"
Apr 17 23:40:32.163623 containerd[1996]: time="2026-04-17T23:40:32.163585217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 17 23:40:32.164830 containerd[1996]: time="2026-04-17T23:40:32.164789975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 17 23:40:33.918214 containerd[1996]: time="2026-04-17T23:40:33.918158712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:33.919682 containerd[1996]: time="2026-04-17T23:40:33.919612326Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451659"
Apr 17 23:40:33.921373 containerd[1996]: time="2026-04-17T23:40:33.920919055Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:33.924540 containerd[1996]: time="2026-04-17T23:40:33.924502695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:33.925866 containerd[1996]: time="2026-04-17T23:40:33.925829408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.760990655s"
Apr 17 23:40:33.925987 containerd[1996]: time="2026-04-17T23:40:33.925966013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 17 23:40:33.927258 containerd[1996]: time="2026-04-17T23:40:33.927222266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 17 23:40:35.123949 containerd[1996]: time="2026-04-17T23:40:35.123898264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:35.125314 containerd[1996]: time="2026-04-17T23:40:35.125260443Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555290"
Apr 17 23:40:35.127905 containerd[1996]: time="2026-04-17T23:40:35.126585933Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:35.129711 containerd[1996]: time="2026-04-17T23:40:35.129677623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:35.131095 containerd[1996]: time="2026-04-17T23:40:35.131061547Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.203805251s"
Apr 17 23:40:35.131242 containerd[1996]: time="2026-04-17T23:40:35.131221475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 17 23:40:35.132327 containerd[1996]: time="2026-04-17T23:40:35.132301434Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 17 23:40:36.367403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508568538.mount: Deactivated successfully.
Apr 17 23:40:36.688186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:40:36.697641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:40:36.810845 containerd[1996]: time="2026-04-17T23:40:36.810794180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:36.812258 containerd[1996]: time="2026-04-17T23:40:36.812214466Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699925"
Apr 17 23:40:36.815878 containerd[1996]: time="2026-04-17T23:40:36.815842060Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:36.821370 containerd[1996]: time="2026-04-17T23:40:36.819488709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:36.821370 containerd[1996]: time="2026-04-17T23:40:36.820577516Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.688244943s"
Apr 17 23:40:36.821370 containerd[1996]: time="2026-04-17T23:40:36.821024004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 17 23:40:36.821946 containerd[1996]: time="2026-04-17T23:40:36.821733353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 17 23:40:36.919660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:40:36.932001 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:40:36.975470 kubelet[2556]: E0417 23:40:36.975309 2556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:40:36.978089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:40:36.978346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:40:37.345097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395922681.mount: Deactivated successfully.
Apr 17 23:40:38.729445 containerd[1996]: time="2026-04-17T23:40:38.729385868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:38.731429 containerd[1996]: time="2026-04-17T23:40:38.731215200Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Apr 17 23:40:38.733920 containerd[1996]: time="2026-04-17T23:40:38.733390271Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:38.737745 containerd[1996]: time="2026-04-17T23:40:38.737699475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:38.739208 containerd[1996]: time="2026-04-17T23:40:38.739163704Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.917392417s"
Apr 17 23:40:38.739332 containerd[1996]: time="2026-04-17T23:40:38.739214219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 17 23:40:38.739933 containerd[1996]: time="2026-04-17T23:40:38.739747817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:40:39.217403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874953356.mount: Deactivated successfully.
Apr 17 23:40:39.228438 containerd[1996]: time="2026-04-17T23:40:39.228381559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:39.230545 containerd[1996]: time="2026-04-17T23:40:39.230249868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 17 23:40:39.233906 containerd[1996]: time="2026-04-17T23:40:39.232504562Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:39.241336 containerd[1996]: time="2026-04-17T23:40:39.239627212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:39.241336 containerd[1996]: time="2026-04-17T23:40:39.240743639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 500.954481ms"
Apr 17 23:40:39.241336 containerd[1996]: time="2026-04-17T23:40:39.240786424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:40:39.241336 containerd[1996]: time="2026-04-17T23:40:39.241328178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 17 23:40:39.805348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060097653.mount: Deactivated successfully.
Apr 17 23:40:40.971225 containerd[1996]: time="2026-04-17T23:40:40.971159608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:40.973398 containerd[1996]: time="2026-04-17T23:40:40.973152328Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644465"
Apr 17 23:40:40.975729 containerd[1996]: time="2026-04-17T23:40:40.975682874Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:40.981737 containerd[1996]: time="2026-04-17T23:40:40.981658926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:40:40.983487 containerd[1996]: time="2026-04-17T23:40:40.983105265Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.741746923s"
Apr 17 23:40:40.983487 containerd[1996]: time="2026-04-17T23:40:40.983154657Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 17 23:40:42.051611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:40:42.057967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:40:42.096759 systemd[1]: Reloading requested from client PID 2710 ('systemctl') (unit session-7.scope)...
Apr 17 23:40:42.096778 systemd[1]: Reloading...
Apr 17 23:40:42.225638 zram_generator::config[2751]: No configuration found.
Apr 17 23:40:42.384330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:40:42.470741 systemd[1]: Reloading finished in 373 ms.
Apr 17 23:40:42.506002 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:40:42.534357 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:40:42.539678 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:40:42.539924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:40:42.544932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:40:42.761681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:40:42.773068 (kubelet)[2819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:40:42.824422 kubelet[2819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:40:43.233428 kubelet[2819]: I0417 23:40:43.233292 2819 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 17 23:40:43.233428 kubelet[2819]: I0417 23:40:43.233337 2819 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:40:43.233428 kubelet[2819]: I0417 23:40:43.233357 2819 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:40:43.233428 kubelet[2819]: I0417 23:40:43.233364 2819 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:40:43.234357 kubelet[2819]: I0417 23:40:43.233780 2819 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 23:40:43.249615 kubelet[2819]: E0417 23:40:43.247961 2819 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:40:43.249615 kubelet[2819]: I0417 23:40:43.249425 2819 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:40:43.261905 kubelet[2819]: E0417 23:40:43.261846 2819 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:40:43.262058 kubelet[2819]: I0417 23:40:43.261938 2819 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:40:43.269216 kubelet[2819]: I0417 23:40:43.269184 2819 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:40:43.270337 kubelet[2819]: I0417 23:40:43.270278 2819 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:40:43.270525 kubelet[2819]: I0417 23:40:43.270331 2819 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:40:43.270672 kubelet[2819]: I0417 23:40:43.270529 2819 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 23:40:43.270672 kubelet[2819]: I0417 23:40:43.270543 2819 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 23:40:43.270765 kubelet[2819]: I0417 23:40:43.270676 2819 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:40:43.274504 kubelet[2819]: I0417 23:40:43.274470 2819 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 23:40:43.274721 kubelet[2819]: I0417 23:40:43.274698 2819 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 23:40:43.274721 kubelet[2819]: I0417 23:40:43.274720 2819 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:40:43.274853 kubelet[2819]: I0417 23:40:43.274755 2819 kubelet.go:394] "Adding apiserver pod source"
Apr 17 23:40:43.274853 kubelet[2819]: I0417 23:40:43.274768 2819 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:40:43.279098 kubelet[2819]: I0417 23:40:43.279069 2819 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:40:43.283119 kubelet[2819]: I0417 23:40:43.282544 2819 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:40:43.283119 kubelet[2819]: I0417 23:40:43.282639 2819 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:40:43.283119 kubelet[2819]: W0417 23:40:43.282726 2819 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:40:43.289358 kubelet[2819]: I0417 23:40:43.288986 2819 server.go:1257] "Started kubelet"
Apr 17 23:40:43.290916 kubelet[2819]: I0417 23:40:43.290603 2819 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 17 23:40:43.292613 kubelet[2819]: I0417 23:40:43.292569 2819 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:40:43.302585 kubelet[2819]: I0417 23:40:43.301306 2819 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:40:43.302895 kubelet[2819]: E0417 23:40:43.299884 2819 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-87.18a7495d58a51c22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-87,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-87,},FirstTimestamp:2026-04-17 23:40:43.288951842 +0000 UTC m=+0.511209109,LastTimestamp:2026-04-17 23:40:43.288951842 +0000 UTC m=+0.511209109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-87,}"
Apr 17 23:40:43.302895 kubelet[2819]: I0417 23:40:43.297778 2819 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 17 23:40:43.303059 kubelet[2819]: I0417 23:40:43.297791 2819 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:40:43.303121 kubelet[2819]: E0417 23:40:43.297946 2819 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-24-87\" not found"
Apr 17 23:40:43.303286 kubelet[2819]: E0417 23:40:43.303249 2819 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-87?timeout=10s\": dial tcp 172.31.24.87:6443: connect: connection refused" interval="200ms"
Apr 17 23:40:43.306502 kubelet[2819]: I0417 23:40:43.296306 2819 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:40:43.306629 kubelet[2819]: I0417 23:40:43.292709 2819 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:40:43.306783 kubelet[2819]: I0417 23:40:43.306751 2819 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:40:43.306980 kubelet[2819]: I0417 23:40:43.306962 2819 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:40:43.309646 kubelet[2819]: I0417 23:40:43.309376 2819 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:40:43.309646 kubelet[2819]: I0417 23:40:43.309498 2819 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:40:43.311920 kubelet[2819]: I0417 23:40:43.311905 2819 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:40:43.317155 kubelet[2819]: I0417 23:40:43.317059 2819 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:40:43.347630 kubelet[2819]: I0417 23:40:43.347502 2819 cpu_manager.go:225] "Starting" policy="none"
Apr 17 23:40:43.347630 kubelet[2819]: I0417 23:40:43.347539 2819 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 17 23:40:43.347630 kubelet[2819]: I0417 23:40:43.347558 2819 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 17 23:40:43.353359 kubelet[2819]: I0417 23:40:43.353317 2819 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:40:43.354048 kubelet[2819]: I0417 23:40:43.353696 2819 policy_none.go:50] "Start"
Apr 17 23:40:43.354048 kubelet[2819]: I0417 23:40:43.353727 2819 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:40:43.354048 kubelet[2819]: I0417 23:40:43.353742 2819 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:40:43.355170 kubelet[2819]: I0417 23:40:43.355151 2819 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:40:43.356981 kubelet[2819]: I0417 23:40:43.356624 2819 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 17 23:40:43.356981 kubelet[2819]: I0417 23:40:43.356657 2819 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 17 23:40:43.356981 kubelet[2819]: E0417 23:40:43.356703 2819 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:40:43.358616 kubelet[2819]: I0417 23:40:43.358545 2819 policy_none.go:44] "Start"
Apr 17 23:40:43.370624 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:40:43.380291 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:40:43.384834 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 23:40:43.394860 kubelet[2819]: E0417 23:40:43.394833 2819 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:40:43.395574 kubelet[2819]: I0417 23:40:43.395264 2819 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 17 23:40:43.395574 kubelet[2819]: I0417 23:40:43.395282 2819 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:40:43.397563 kubelet[2819]: I0417 23:40:43.396055 2819 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 17 23:40:43.398780 kubelet[2819]: E0417 23:40:43.398757 2819 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:40:43.398877 kubelet[2819]: E0417 23:40:43.398805 2819 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-87\" not found"
Apr 17 23:40:43.469949 systemd[1]: Created slice kubepods-burstable-pod8c11d26b1c9c9ef8c1b125bb970cd4f4.slice - libcontainer container kubepods-burstable-pod8c11d26b1c9c9ef8c1b125bb970cd4f4.slice.
Apr 17 23:40:43.477683 kubelet[2819]: E0417 23:40:43.477376 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87"
Apr 17 23:40:43.480337 systemd[1]: Created slice kubepods-burstable-podaf6d2c35854a714c3e690adb3c478bd9.slice - libcontainer container kubepods-burstable-podaf6d2c35854a714c3e690adb3c478bd9.slice.
Apr 17 23:40:43.490236 kubelet[2819]: E0417 23:40:43.490138 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87"
Apr 17 23:40:43.494636 systemd[1]: Created slice kubepods-burstable-pod26332735a3f7753018d12260905ab5ff.slice - libcontainer container kubepods-burstable-pod26332735a3f7753018d12260905ab5ff.slice.
Apr 17 23:40:43.497492 kubelet[2819]: I0417 23:40:43.497454 2819 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87"
Apr 17 23:40:43.497877 kubelet[2819]: E0417 23:40:43.497842 2819 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.24.87:6443/api/v1/nodes\": dial tcp 172.31.24.87:6443: connect: connection refused" node="ip-172-31-24-87"
Apr 17 23:40:43.498088 kubelet[2819]: E0417 23:40:43.498060 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87"
Apr 17 23:40:43.504078 kubelet[2819]: E0417 23:40:43.504036 2819 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-87?timeout=10s\": dial tcp 172.31.24.87:6443: connect: connection refused" interval="400ms"
Apr 17 23:40:43.513535 kubelet[2819]: I0417 23:40:43.513470 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:43.513939 kubelet[2819]: I0417 23:40:43.513617 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:43.513939 kubelet[2819]: I0417 23:40:43.513654 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:43.513939 kubelet[2819]: I0417 23:40:43.513676 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:43.513939 kubelet[2819]: I0417 23:40:43.513749 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:43.513939 kubelet[2819]: I0417 23:40:43.513768 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:43.514122 kubelet[2819]: I0417 23:40:43.513782 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:43.514122 kubelet[2819]: I0417 23:40:43.513799 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:43.514122 kubelet[2819]: I0417 23:40:43.513823 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26332735a3f7753018d12260905ab5ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-87\" (UID: \"26332735a3f7753018d12260905ab5ff\") " pod="kube-system/kube-scheduler-ip-172-31-24-87"
Apr 17 23:40:43.701122 kubelet[2819]: I0417 23:40:43.701088 2819 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87"
Apr 17 23:40:43.701488 kubelet[2819]: E0417 23:40:43.701457 2819 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.24.87:6443/api/v1/nodes\": dial tcp 172.31.24.87:6443: connect: connection refused" node="ip-172-31-24-87"
Apr 17 23:40:43.784100 containerd[1996]: time="2026-04-17T23:40:43.783547302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-87,Uid:8c11d26b1c9c9ef8c1b125bb970cd4f4,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:43.795121 containerd[1996]: time="2026-04-17T23:40:43.795075922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-87,Uid:af6d2c35854a714c3e690adb3c478bd9,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:43.802912 containerd[1996]: time="2026-04-17T23:40:43.802873966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-87,Uid:26332735a3f7753018d12260905ab5ff,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:43.904518 kubelet[2819]: E0417 23:40:43.904467 2819 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-87?timeout=10s\": dial tcp 172.31.24.87:6443: connect: connection refused" interval="800ms"
Apr 17 23:40:44.103423 kubelet[2819]: I0417 23:40:44.103271 2819 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87"
Apr 17 23:40:44.104090 kubelet[2819]: E0417 23:40:44.103680 2819 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.24.87:6443/api/v1/nodes\": dial tcp 172.31.24.87:6443: connect: connection refused" node="ip-172-31-24-87"
Apr 17 23:40:44.311098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615739964.mount: Deactivated successfully.
Apr 17 23:40:44.329231 containerd[1996]: time="2026-04-17T23:40:44.329180437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:40:44.331131 containerd[1996]: time="2026-04-17T23:40:44.331020598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 17 23:40:44.333207 containerd[1996]: time="2026-04-17T23:40:44.333160157Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:40:44.335245 containerd[1996]: time="2026-04-17T23:40:44.335203711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:40:44.337324 containerd[1996]: time="2026-04-17T23:40:44.337270939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:40:44.339641 containerd[1996]: time="2026-04-17T23:40:44.339576943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:40:44.341357 containerd[1996]: time="2026-04-17T23:40:44.341099140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:40:44.345188 containerd[1996]: time="2026-04-17T23:40:44.345142979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:40:44.346083 
containerd[1996]: time="2026-04-17T23:40:44.346042624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.877385ms" Apr 17 23:40:44.348492 containerd[1996]: time="2026-04-17T23:40:44.348448202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.498675ms" Apr 17 23:40:44.349014 containerd[1996]: time="2026-04-17T23:40:44.348980672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.338757ms" Apr 17 23:40:44.558568 containerd[1996]: time="2026-04-17T23:40:44.557663594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:44.558568 containerd[1996]: time="2026-04-17T23:40:44.557736560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:44.558568 containerd[1996]: time="2026-04-17T23:40:44.557758944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.558568 containerd[1996]: time="2026-04-17T23:40:44.557857599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.566983 containerd[1996]: time="2026-04-17T23:40:44.566729098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:44.566983 containerd[1996]: time="2026-04-17T23:40:44.566816792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:44.566983 containerd[1996]: time="2026-04-17T23:40:44.566839390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.567640 containerd[1996]: time="2026-04-17T23:40:44.566960266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.575352 containerd[1996]: time="2026-04-17T23:40:44.574842902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:44.575352 containerd[1996]: time="2026-04-17T23:40:44.574925174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:44.575352 containerd[1996]: time="2026-04-17T23:40:44.574964228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.575352 containerd[1996]: time="2026-04-17T23:40:44.575106506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:44.597866 systemd[1]: Started cri-containerd-a94d0829648a3bfb02d3ef9faee096dd61abe94aa16e671ba4217938e6ee5e2d.scope - libcontainer container a94d0829648a3bfb02d3ef9faee096dd61abe94aa16e671ba4217938e6ee5e2d. 
Apr 17 23:40:44.622137 systemd[1]: Started cri-containerd-5053c32e18aa995650d4639c4a1d46ec2eee27b9d8c66a5f4091c1b4c08ce9d1.scope - libcontainer container 5053c32e18aa995650d4639c4a1d46ec2eee27b9d8c66a5f4091c1b4c08ce9d1. Apr 17 23:40:44.640817 systemd[1]: Started cri-containerd-8296f5a4f19a6d173c1231791b0d81ba0851daaa23aa1c96125e75fac2d9e7be.scope - libcontainer container 8296f5a4f19a6d173c1231791b0d81ba0851daaa23aa1c96125e75fac2d9e7be. Apr 17 23:40:44.705098 kubelet[2819]: E0417 23:40:44.705020 2819 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-87?timeout=10s\": dial tcp 172.31.24.87:6443: connect: connection refused" interval="1.6s" Apr 17 23:40:44.732203 containerd[1996]: time="2026-04-17T23:40:44.732153531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-87,Uid:26332735a3f7753018d12260905ab5ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a94d0829648a3bfb02d3ef9faee096dd61abe94aa16e671ba4217938e6ee5e2d\"" Apr 17 23:40:44.732991 containerd[1996]: time="2026-04-17T23:40:44.732897515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-87,Uid:8c11d26b1c9c9ef8c1b125bb970cd4f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5053c32e18aa995650d4639c4a1d46ec2eee27b9d8c66a5f4091c1b4c08ce9d1\"" Apr 17 23:40:44.745840 containerd[1996]: time="2026-04-17T23:40:44.745483080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-87,Uid:af6d2c35854a714c3e690adb3c478bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8296f5a4f19a6d173c1231791b0d81ba0851daaa23aa1c96125e75fac2d9e7be\"" Apr 17 23:40:44.750345 containerd[1996]: time="2026-04-17T23:40:44.750298699Z" level=info msg="CreateContainer within sandbox \"5053c32e18aa995650d4639c4a1d46ec2eee27b9d8c66a5f4091c1b4c08ce9d1\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:40:44.751846 containerd[1996]: time="2026-04-17T23:40:44.751810332Z" level=info msg="CreateContainer within sandbox \"a94d0829648a3bfb02d3ef9faee096dd61abe94aa16e671ba4217938e6ee5e2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:40:44.756500 containerd[1996]: time="2026-04-17T23:40:44.756461261Z" level=info msg="CreateContainer within sandbox \"8296f5a4f19a6d173c1231791b0d81ba0851daaa23aa1c96125e75fac2d9e7be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:40:44.795431 containerd[1996]: time="2026-04-17T23:40:44.795382123Z" level=info msg="CreateContainer within sandbox \"a94d0829648a3bfb02d3ef9faee096dd61abe94aa16e671ba4217938e6ee5e2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5a53335225cf81e3469cae48a5af87f1df74f5e6b4e041c9cb75c96aaa83cd6\"" Apr 17 23:40:44.796201 containerd[1996]: time="2026-04-17T23:40:44.796167958Z" level=info msg="StartContainer for \"a5a53335225cf81e3469cae48a5af87f1df74f5e6b4e041c9cb75c96aaa83cd6\"" Apr 17 23:40:44.811639 containerd[1996]: time="2026-04-17T23:40:44.811508976Z" level=info msg="CreateContainer within sandbox \"8296f5a4f19a6d173c1231791b0d81ba0851daaa23aa1c96125e75fac2d9e7be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2334d3e832121b900d25baa1e327ad92a69c914fdf6cb7934ddcceaca1dc750c\"" Apr 17 23:40:44.813044 containerd[1996]: time="2026-04-17T23:40:44.812988561Z" level=info msg="StartContainer for \"2334d3e832121b900d25baa1e327ad92a69c914fdf6cb7934ddcceaca1dc750c\"" Apr 17 23:40:44.822002 containerd[1996]: time="2026-04-17T23:40:44.821875610Z" level=info msg="CreateContainer within sandbox \"5053c32e18aa995650d4639c4a1d46ec2eee27b9d8c66a5f4091c1b4c08ce9d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bc4beff8cdfa33e09802d65404db7857b18a62f5bcd2d8202c166bc355926dc\"" Apr 17 
23:40:44.822732 containerd[1996]: time="2026-04-17T23:40:44.822642073Z" level=info msg="StartContainer for \"8bc4beff8cdfa33e09802d65404db7857b18a62f5bcd2d8202c166bc355926dc\"" Apr 17 23:40:44.837978 systemd[1]: Started cri-containerd-a5a53335225cf81e3469cae48a5af87f1df74f5e6b4e041c9cb75c96aaa83cd6.scope - libcontainer container a5a53335225cf81e3469cae48a5af87f1df74f5e6b4e041c9cb75c96aaa83cd6. Apr 17 23:40:44.893143 systemd[1]: Started cri-containerd-2334d3e832121b900d25baa1e327ad92a69c914fdf6cb7934ddcceaca1dc750c.scope - libcontainer container 2334d3e832121b900d25baa1e327ad92a69c914fdf6cb7934ddcceaca1dc750c. Apr 17 23:40:44.896173 systemd[1]: Started cri-containerd-8bc4beff8cdfa33e09802d65404db7857b18a62f5bcd2d8202c166bc355926dc.scope - libcontainer container 8bc4beff8cdfa33e09802d65404db7857b18a62f5bcd2d8202c166bc355926dc. Apr 17 23:40:44.912632 kubelet[2819]: I0417 23:40:44.909993 2819 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87" Apr 17 23:40:44.913098 kubelet[2819]: E0417 23:40:44.911586 2819 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.24.87:6443/api/v1/nodes\": dial tcp 172.31.24.87:6443: connect: connection refused" node="ip-172-31-24-87" Apr 17 23:40:44.927364 containerd[1996]: time="2026-04-17T23:40:44.927318899Z" level=info msg="StartContainer for \"a5a53335225cf81e3469cae48a5af87f1df74f5e6b4e041c9cb75c96aaa83cd6\" returns successfully" Apr 17 23:40:44.996749 containerd[1996]: time="2026-04-17T23:40:44.996690572Z" level=info msg="StartContainer for \"2334d3e832121b900d25baa1e327ad92a69c914fdf6cb7934ddcceaca1dc750c\" returns successfully" Apr 17 23:40:45.009916 containerd[1996]: time="2026-04-17T23:40:45.009853918Z" level=info msg="StartContainer for \"8bc4beff8cdfa33e09802d65404db7857b18a62f5bcd2d8202c166bc355926dc\" returns successfully" Apr 17 23:40:45.392260 kubelet[2819]: E0417 23:40:45.392219 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:45.398368 kubelet[2819]: E0417 23:40:45.398334 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:45.404923 kubelet[2819]: E0417 23:40:45.404889 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:46.404000 kubelet[2819]: E0417 23:40:46.403752 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:46.404000 kubelet[2819]: E0417 23:40:46.403832 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:46.518181 kubelet[2819]: I0417 23:40:46.517839 2819 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87" Apr 17 23:40:46.717049 kubelet[2819]: E0417 23:40:46.716383 2819 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:46.928300 kubelet[2819]: E0417 23:40:46.928244 2819 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-87\" not found" node="ip-172-31-24-87" Apr 17 23:40:47.021458 kubelet[2819]: I0417 23:40:47.021056 2819 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-24-87" Apr 17 23:40:47.098352 kubelet[2819]: I0417 23:40:47.098317 2819 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-87" Apr 17 23:40:47.110626 kubelet[2819]: E0417 23:40:47.110383 2819 kubelet.go:3342] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-87" Apr 17 23:40:47.110626 kubelet[2819]: I0417 23:40:47.110434 2819 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-87" Apr 17 23:40:47.114697 kubelet[2819]: E0417 23:40:47.114087 2819 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-87" Apr 17 23:40:47.114697 kubelet[2819]: I0417 23:40:47.114133 2819 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-87" Apr 17 23:40:47.117156 kubelet[2819]: E0417 23:40:47.117102 2819 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-87" Apr 17 23:40:47.281323 kubelet[2819]: I0417 23:40:47.280695 2819 apiserver.go:52] "Watching apiserver" Apr 17 23:40:47.303651 kubelet[2819]: I0417 23:40:47.303606 2819 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:40:48.765527 systemd[1]: Reloading requested from client PID 3108 ('systemctl') (unit session-7.scope)... Apr 17 23:40:48.765545 systemd[1]: Reloading... Apr 17 23:40:48.867629 zram_generator::config[3147]: No configuration found. Apr 17 23:40:48.999101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:40:49.101951 systemd[1]: Reloading finished in 335 ms. 
Apr 17 23:40:49.150181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:40:49.164197 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:40:49.164483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:40:49.169919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:40:49.407521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:40:49.420847 (kubelet)[3208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:40:49.481472 kubelet[3208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:40:49.494406 kubelet[3208]: I0417 23:40:49.494346 3208 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:40:49.494406 kubelet[3208]: I0417 23:40:49.494396 3208 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:40:49.494406 kubelet[3208]: I0417 23:40:49.494416 3208 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:40:49.494406 kubelet[3208]: I0417 23:40:49.494424 3208 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:40:49.494819 kubelet[3208]: I0417 23:40:49.494792 3208 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:40:49.499158 kubelet[3208]: I0417 23:40:49.499121 3208 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:40:49.502092 kubelet[3208]: I0417 23:40:49.501254 3208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:40:49.510832 kubelet[3208]: E0417 23:40:49.510790 3208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:40:49.511048 kubelet[3208]: I0417 23:40:49.511032 3208 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:40:49.515492 kubelet[3208]: I0417 23:40:49.515455 3208 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:40:49.517478 kubelet[3208]: I0417 23:40:49.517433 3208 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:40:49.518705 kubelet[3208]: I0417 23:40:49.517640 3208 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:40:49.518889 kubelet[3208]: I0417 23:40:49.518714 3208 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 
23:40:49.518889 kubelet[3208]: I0417 23:40:49.518730 3208 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:40:49.518889 kubelet[3208]: I0417 23:40:49.518765 3208 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:40:49.519024 kubelet[3208]: I0417 23:40:49.519017 3208 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:40:49.520751 kubelet[3208]: I0417 23:40:49.520724 3208 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:40:49.520751 kubelet[3208]: I0417 23:40:49.520752 3208 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:40:49.520915 kubelet[3208]: I0417 23:40:49.520777 3208 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:40:49.520915 kubelet[3208]: I0417 23:40:49.520790 3208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:40:49.527180 kubelet[3208]: I0417 23:40:49.527153 3208 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:40:49.530295 kubelet[3208]: I0417 23:40:49.530268 3208 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:40:49.530438 kubelet[3208]: I0417 23:40:49.530315 3208 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:40:49.541400 kubelet[3208]: I0417 23:40:49.541371 3208 server.go:1257] "Started kubelet" Apr 17 23:40:49.544635 kubelet[3208]: I0417 23:40:49.544559 3208 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:40:49.545622 kubelet[3208]: I0417 23:40:49.545488 3208 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:40:49.545622 kubelet[3208]: I0417 23:40:49.545568 
3208 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:40:49.546654 kubelet[3208]: I0417 23:40:49.546459 3208 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:40:49.547423 kubelet[3208]: I0417 23:40:49.546883 3208 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:40:49.549734 kubelet[3208]: I0417 23:40:49.549712 3208 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:40:49.556561 kubelet[3208]: I0417 23:40:49.556306 3208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:40:49.560884 kubelet[3208]: I0417 23:40:49.560860 3208 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:40:49.561240 kubelet[3208]: I0417 23:40:49.561224 3208 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:40:49.562185 kubelet[3208]: I0417 23:40:49.561481 3208 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:40:49.563516 kubelet[3208]: I0417 23:40:49.563480 3208 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:40:49.563803 kubelet[3208]: I0417 23:40:49.563783 3208 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:40:49.565145 kubelet[3208]: E0417 23:40:49.565110 3208 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:40:49.565736 kubelet[3208]: I0417 23:40:49.565715 3208 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:40:49.578921 kubelet[3208]: I0417 23:40:49.578867 3208 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:40:49.580196 kubelet[3208]: I0417 23:40:49.580174 3208 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:40:49.580290 kubelet[3208]: I0417 23:40:49.580283 3208 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:40:49.580357 kubelet[3208]: I0417 23:40:49.580351 3208 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:40:49.580489 kubelet[3208]: E0417 23:40:49.580475 3208 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:40:49.633429 kubelet[3208]: I0417 23:40:49.633406 3208 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:40:49.633645 kubelet[3208]: I0417 23:40:49.633628 3208 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:40:49.633811 kubelet[3208]: I0417 23:40:49.633740 3208 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:40:49.634127 kubelet[3208]: I0417 23:40:49.634064 3208 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:40:49.634127 kubelet[3208]: I0417 23:40:49.634082 3208 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:40:49.634712 kubelet[3208]: I0417 23:40:49.634300 3208 policy_none.go:50] "Start" Apr 17 23:40:49.634712 kubelet[3208]: I0417 23:40:49.634320 3208 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:40:49.634712 kubelet[3208]: I0417 23:40:49.634335 3208 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:40:49.636312 kubelet[3208]: I0417 23:40:49.636290 3208 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:40:49.636432 kubelet[3208]: I0417 23:40:49.636422 3208 
policy_none.go:44] "Start"
Apr 17 23:40:49.643857 kubelet[3208]: E0417 23:40:49.643816 3208 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:40:49.644059 kubelet[3208]: I0417 23:40:49.644045 3208 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 17 23:40:49.644143 kubelet[3208]: I0417 23:40:49.644062 3208 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:40:49.646940 kubelet[3208]: I0417 23:40:49.645080 3208 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 17 23:40:49.647164 kubelet[3208]: E0417 23:40:49.647141 3208 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:40:49.681285 kubelet[3208]: I0417 23:40:49.681248 3208 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.682090 kubelet[3208]: I0417 23:40:49.682058 3208 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:49.682565 kubelet[3208]: I0417 23:40:49.682538 3208 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-87"
Apr 17 23:40:49.755711 kubelet[3208]: I0417 23:40:49.755684 3208 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-24-87"
Apr 17 23:40:49.763096 kubelet[3208]: I0417 23:40:49.762818 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.763096 kubelet[3208]: I0417 23:40:49.762860 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.763096 kubelet[3208]: I0417 23:40:49.762888 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:49.763096 kubelet[3208]: I0417 23:40:49.762912 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:49.763096 kubelet[3208]: I0417 23:40:49.762933 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.763428 kubelet[3208]: I0417 23:40:49.762954 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.763428 kubelet[3208]: I0417 23:40:49.762993 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af6d2c35854a714c3e690adb3c478bd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-87\" (UID: \"af6d2c35854a714c3e690adb3c478bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:49.763428 kubelet[3208]: I0417 23:40:49.763018 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26332735a3f7753018d12260905ab5ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-87\" (UID: \"26332735a3f7753018d12260905ab5ff\") " pod="kube-system/kube-scheduler-ip-172-31-24-87"
Apr 17 23:40:49.763428 kubelet[3208]: I0417 23:40:49.763045 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c11d26b1c9c9ef8c1b125bb970cd4f4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-87\" (UID: \"8c11d26b1c9c9ef8c1b125bb970cd4f4\") " pod="kube-system/kube-apiserver-ip-172-31-24-87"
Apr 17 23:40:49.766075 kubelet[3208]: I0417 23:40:49.765734 3208 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-24-87"
Apr 17 23:40:49.766075 kubelet[3208]: I0417 23:40:49.765823 3208 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-24-87"
Apr 17 23:40:50.524218 kubelet[3208]: I0417 23:40:50.523901 3208 apiserver.go:52] "Watching apiserver"
Apr 17 23:40:50.562491 kubelet[3208]: I0417 23:40:50.562411 3208 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 23:40:50.623393 kubelet[3208]: I0417 23:40:50.623363 3208 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:50.635029 kubelet[3208]: E0417 23:40:50.634834 3208 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-87\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-87"
Apr 17 23:40:50.679479 kubelet[3208]: I0417 23:40:50.679406 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-87" podStartSLOduration=1.679385275 podStartE2EDuration="1.679385275s" podCreationTimestamp="2026-04-17 23:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:50.666474811 +0000 UTC m=+1.240729264" watchObservedRunningTime="2026-04-17 23:40:50.679385275 +0000 UTC m=+1.253639726"
Apr 17 23:40:50.696930 kubelet[3208]: I0417 23:40:50.696870 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-87" podStartSLOduration=1.6968568739999998 podStartE2EDuration="1.696856874s" podCreationTimestamp="2026-04-17 23:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:50.679813503 +0000 UTC m=+1.254067956" watchObservedRunningTime="2026-04-17 23:40:50.696856874 +0000 UTC m=+1.271111323"
Apr 17 23:40:50.709862 kubelet[3208]: I0417 23:40:50.709306 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-87" podStartSLOduration=1.709289981 podStartE2EDuration="1.709289981s" podCreationTimestamp="2026-04-17 23:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:50.697066207 +0000 UTC m=+1.271320659" watchObservedRunningTime="2026-04-17 23:40:50.709289981 +0000 UTC m=+1.283544447"
Apr 17 23:40:54.732332 kubelet[3208]: I0417 23:40:54.732300 3208 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 17 23:40:54.734242 containerd[1996]: time="2026-04-17T23:40:54.734141470Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 17 23:40:54.734695 kubelet[3208]: I0417 23:40:54.734455 3208 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 17 23:40:55.926453 systemd[1]: Created slice kubepods-besteffort-podac83fddf_4353_4407_8b2a_5f32cdc10830.slice - libcontainer container kubepods-besteffort-podac83fddf_4353_4407_8b2a_5f32cdc10830.slice.
Apr 17 23:40:56.007374 kubelet[3208]: I0417 23:40:56.007058 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac83fddf-4353-4407-8b2a-5f32cdc10830-kube-proxy\") pod \"kube-proxy-lszsk\" (UID: \"ac83fddf-4353-4407-8b2a-5f32cdc10830\") " pod="kube-system/kube-proxy-lszsk"
Apr 17 23:40:56.007374 kubelet[3208]: I0417 23:40:56.007116 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac83fddf-4353-4407-8b2a-5f32cdc10830-xtables-lock\") pod \"kube-proxy-lszsk\" (UID: \"ac83fddf-4353-4407-8b2a-5f32cdc10830\") " pod="kube-system/kube-proxy-lszsk"
Apr 17 23:40:56.007374 kubelet[3208]: I0417 23:40:56.007142 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac83fddf-4353-4407-8b2a-5f32cdc10830-lib-modules\") pod \"kube-proxy-lszsk\" (UID: \"ac83fddf-4353-4407-8b2a-5f32cdc10830\") " pod="kube-system/kube-proxy-lszsk"
Apr 17 23:40:56.007374 kubelet[3208]: I0417 23:40:56.007162 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brplh\" (UniqueName: \"kubernetes.io/projected/ac83fddf-4353-4407-8b2a-5f32cdc10830-kube-api-access-brplh\") pod \"kube-proxy-lszsk\" (UID: \"ac83fddf-4353-4407-8b2a-5f32cdc10830\") " pod="kube-system/kube-proxy-lszsk"
Apr 17 23:40:56.037411 systemd[1]: Created slice kubepods-besteffort-podcd6b81b9_a826_423a_904e_616b07f2144c.slice - libcontainer container kubepods-besteffort-podcd6b81b9_a826_423a_904e_616b07f2144c.slice.
Apr 17 23:40:56.108142 kubelet[3208]: I0417 23:40:56.108094 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgdd\" (UniqueName: \"kubernetes.io/projected/cd6b81b9-a826-423a-904e-616b07f2144c-kube-api-access-rsgdd\") pod \"tigera-operator-6cf4cccc57-mrrkn\" (UID: \"cd6b81b9-a826-423a-904e-616b07f2144c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-mrrkn"
Apr 17 23:40:56.108316 kubelet[3208]: I0417 23:40:56.108191 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd6b81b9-a826-423a-904e-616b07f2144c-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-mrrkn\" (UID: \"cd6b81b9-a826-423a-904e-616b07f2144c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-mrrkn"
Apr 17 23:40:56.239447 containerd[1996]: time="2026-04-17T23:40:56.239037425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lszsk,Uid:ac83fddf-4353-4407-8b2a-5f32cdc10830,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:56.277928 containerd[1996]: time="2026-04-17T23:40:56.276866704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:40:56.277928 containerd[1996]: time="2026-04-17T23:40:56.276942814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:40:56.277928 containerd[1996]: time="2026-04-17T23:40:56.276960433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:40:56.277928 containerd[1996]: time="2026-04-17T23:40:56.277053564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:40:56.311992 systemd[1]: Started cri-containerd-68d85abf49e2bffca23b29c59e899b3923ba49c6824aa68b0365d484675bbd45.scope - libcontainer container 68d85abf49e2bffca23b29c59e899b3923ba49c6824aa68b0365d484675bbd45.
Apr 17 23:40:56.341844 containerd[1996]: time="2026-04-17T23:40:56.341797423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lszsk,Uid:ac83fddf-4353-4407-8b2a-5f32cdc10830,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d85abf49e2bffca23b29c59e899b3923ba49c6824aa68b0365d484675bbd45\""
Apr 17 23:40:56.347050 containerd[1996]: time="2026-04-17T23:40:56.347007992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-mrrkn,Uid:cd6b81b9-a826-423a-904e-616b07f2144c,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:40:56.351723 containerd[1996]: time="2026-04-17T23:40:56.351567874Z" level=info msg="CreateContainer within sandbox \"68d85abf49e2bffca23b29c59e899b3923ba49c6824aa68b0365d484675bbd45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:40:56.386729 containerd[1996]: time="2026-04-17T23:40:56.386499317Z" level=info msg="CreateContainer within sandbox \"68d85abf49e2bffca23b29c59e899b3923ba49c6824aa68b0365d484675bbd45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f54ed0097ca03ff829bf409ceda871561a5c3fad18b21c6db119a4cf36cb32aa\""
Apr 17 23:40:56.387241 containerd[1996]: time="2026-04-17T23:40:56.387202194Z" level=info msg="StartContainer for \"f54ed0097ca03ff829bf409ceda871561a5c3fad18b21c6db119a4cf36cb32aa\""
Apr 17 23:40:56.390407 containerd[1996]: time="2026-04-17T23:40:56.389793418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:40:56.390407 containerd[1996]: time="2026-04-17T23:40:56.389871418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:40:56.390407 containerd[1996]: time="2026-04-17T23:40:56.389891381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:40:56.390407 containerd[1996]: time="2026-04-17T23:40:56.389997545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:40:56.415792 systemd[1]: Started cri-containerd-27b76316d6e6fbd293101efaf25e82800c51b6fd3cf979aa0042800550017d4e.scope - libcontainer container 27b76316d6e6fbd293101efaf25e82800c51b6fd3cf979aa0042800550017d4e.
Apr 17 23:40:56.438814 systemd[1]: Started cri-containerd-f54ed0097ca03ff829bf409ceda871561a5c3fad18b21c6db119a4cf36cb32aa.scope - libcontainer container f54ed0097ca03ff829bf409ceda871561a5c3fad18b21c6db119a4cf36cb32aa.
Apr 17 23:40:56.487634 containerd[1996]: time="2026-04-17T23:40:56.487574459Z" level=info msg="StartContainer for \"f54ed0097ca03ff829bf409ceda871561a5c3fad18b21c6db119a4cf36cb32aa\" returns successfully"
Apr 17 23:40:56.525476 containerd[1996]: time="2026-04-17T23:40:56.524806412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-mrrkn,Uid:cd6b81b9-a826-423a-904e-616b07f2144c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27b76316d6e6fbd293101efaf25e82800c51b6fd3cf979aa0042800550017d4e\""
Apr 17 23:40:56.531441 update_engine[1964]: I20260417 23:40:56.529328 1964 update_attempter.cc:509] Updating boot flags...
Apr 17 23:40:56.548731 containerd[1996]: time="2026-04-17T23:40:56.548457070Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:40:56.649631 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3394)
Apr 17 23:40:56.911614 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3398)
Apr 17 23:40:57.138896 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3398)
Apr 17 23:40:58.018972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322651545.mount: Deactivated successfully.
Apr 17 23:40:58.279683 kubelet[3208]: I0417 23:40:58.279518 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lszsk" podStartSLOduration=3.279498338 podStartE2EDuration="3.279498338s" podCreationTimestamp="2026-04-17 23:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:56.70773801 +0000 UTC m=+7.281992462" watchObservedRunningTime="2026-04-17 23:40:58.279498338 +0000 UTC m=+8.853752790"
Apr 17 23:41:01.454778 containerd[1996]: time="2026-04-17T23:41:01.454722153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:01.456958 containerd[1996]: time="2026-04-17T23:41:01.456743700Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 17 23:41:01.460088 containerd[1996]: time="2026-04-17T23:41:01.459828713Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:01.464458 containerd[1996]: time="2026-04-17T23:41:01.464407208Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:01.465827 containerd[1996]: time="2026-04-17T23:41:01.465769875Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.917265201s"
Apr 17 23:41:01.466056 containerd[1996]: time="2026-04-17T23:41:01.466033547Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 17 23:41:01.473645 containerd[1996]: time="2026-04-17T23:41:01.473579517Z" level=info msg="CreateContainer within sandbox \"27b76316d6e6fbd293101efaf25e82800c51b6fd3cf979aa0042800550017d4e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:41:01.500183 containerd[1996]: time="2026-04-17T23:41:01.500094760Z" level=info msg="CreateContainer within sandbox \"27b76316d6e6fbd293101efaf25e82800c51b6fd3cf979aa0042800550017d4e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4\""
Apr 17 23:41:01.501063 containerd[1996]: time="2026-04-17T23:41:01.501025527Z" level=info msg="StartContainer for \"f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4\""
Apr 17 23:41:01.539513 systemd[1]: run-containerd-runc-k8s.io-f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4-runc.Foul9C.mount: Deactivated successfully.
Apr 17 23:41:01.547051 systemd[1]: Started cri-containerd-f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4.scope - libcontainer container f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4.
Apr 17 23:41:01.584631 containerd[1996]: time="2026-04-17T23:41:01.583235466Z" level=info msg="StartContainer for \"f626ce839207a1f236cdc161d9e1551851fb39b3981440aeee9090008b04aad4\" returns successfully"
Apr 17 23:41:04.906585 kubelet[3208]: I0417 23:41:04.906509 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-mrrkn" podStartSLOduration=4.987193678 podStartE2EDuration="9.90648886s" podCreationTimestamp="2026-04-17 23:40:55 +0000 UTC" firstStartedPulling="2026-04-17 23:40:56.547889843 +0000 UTC m=+7.122144275" lastFinishedPulling="2026-04-17 23:41:01.467185024 +0000 UTC m=+12.041439457" observedRunningTime="2026-04-17 23:41:01.729413343 +0000 UTC m=+12.303667796" watchObservedRunningTime="2026-04-17 23:41:04.90648886 +0000 UTC m=+15.480743312"
Apr 17 23:41:08.653829 sudo[2320]: pam_unix(sudo:session): session closed for user root
Apr 17 23:41:08.822126 sshd[2302]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:08.829394 systemd[1]: sshd@6-172.31.24.87:22-20.229.252.112:51700.service: Deactivated successfully.
Apr 17 23:41:08.837398 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:41:08.837743 systemd[1]: session-7.scope: Consumed 3.747s CPU time, 153.2M memory peak, 0B memory swap peak.
Apr 17 23:41:08.839673 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:41:08.841180 systemd-logind[1962]: Removed session 7.
Apr 17 23:41:11.510901 systemd[1]: Created slice kubepods-besteffort-pod3d4eb031_4095_4a17_a513_1b10a029ef79.slice - libcontainer container kubepods-besteffort-pod3d4eb031_4095_4a17_a513_1b10a029ef79.slice.
Apr 17 23:41:11.642457 kubelet[3208]: I0417 23:41:11.642286 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28b2r\" (UniqueName: \"kubernetes.io/projected/3d4eb031-4095-4a17-a513-1b10a029ef79-kube-api-access-28b2r\") pod \"calico-typha-bd6548f7-swr5k\" (UID: \"3d4eb031-4095-4a17-a513-1b10a029ef79\") " pod="calico-system/calico-typha-bd6548f7-swr5k"
Apr 17 23:41:11.642457 kubelet[3208]: I0417 23:41:11.642333 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d4eb031-4095-4a17-a513-1b10a029ef79-typha-certs\") pod \"calico-typha-bd6548f7-swr5k\" (UID: \"3d4eb031-4095-4a17-a513-1b10a029ef79\") " pod="calico-system/calico-typha-bd6548f7-swr5k"
Apr 17 23:41:11.642457 kubelet[3208]: I0417 23:41:11.642354 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d4eb031-4095-4a17-a513-1b10a029ef79-tigera-ca-bundle\") pod \"calico-typha-bd6548f7-swr5k\" (UID: \"3d4eb031-4095-4a17-a513-1b10a029ef79\") " pod="calico-system/calico-typha-bd6548f7-swr5k"
Apr 17 23:41:11.664009 systemd[1]: Created slice kubepods-besteffort-podb06af313_4a3a_4a36_a3fc_4f9f4ba3c530.slice - libcontainer container kubepods-besteffort-podb06af313_4a3a_4a36_a3fc_4f9f4ba3c530.slice.
Apr 17 23:41:11.743272 kubelet[3208]: I0417 23:41:11.743206 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-policysync\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743272 kubelet[3208]: I0417 23:41:11.743250 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfsn\" (UniqueName: \"kubernetes.io/projected/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-kube-api-access-spfsn\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743493 kubelet[3208]: I0417 23:41:11.743319 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-tigera-ca-bundle\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743493 kubelet[3208]: I0417 23:41:11.743340 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-sys-fs\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743493 kubelet[3208]: I0417 23:41:11.743374 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-bpffs\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743493 kubelet[3208]: I0417 23:41:11.743397 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-nodeproc\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.743493 kubelet[3208]: I0417 23:41:11.743430 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-var-lib-calico\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.746832 kubelet[3208]: I0417 23:41:11.743452 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-cni-net-dir\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.746832 kubelet[3208]: I0417 23:41:11.743477 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-flexvol-driver-host\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.746832 kubelet[3208]: I0417 23:41:11.743518 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-xtables-lock\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.746832 kubelet[3208]: I0417 23:41:11.743541 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-lib-modules\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.746832 kubelet[3208]: I0417 23:41:11.743565 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-var-run-calico\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.747048 kubelet[3208]: I0417 23:41:11.743587 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-cni-bin-dir\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.747048 kubelet[3208]: I0417 23:41:11.743640 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-cni-log-dir\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.747048 kubelet[3208]: I0417 23:41:11.743667 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b06af313-4a3a-4a36-a3fc-4f9f4ba3c530-node-certs\") pod \"calico-node-bd5g9\" (UID: \"b06af313-4a3a-4a36-a3fc-4f9f4ba3c530\") " pod="calico-system/calico-node-bd5g9"
Apr 17 23:41:11.780224 kubelet[3208]: E0417 23:41:11.777993 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:11.822007 containerd[1996]: time="2026-04-17T23:41:11.821964039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd6548f7-swr5k,Uid:3d4eb031-4095-4a17-a513-1b10a029ef79,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:11.846632 kubelet[3208]: I0417 23:41:11.843841 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8e90ce8-ae92-43fe-bead-1cd18e86c253-varrun\") pod \"csi-node-driver-jfd6r\" (UID: \"b8e90ce8-ae92-43fe-bead-1cd18e86c253\") " pod="calico-system/csi-node-driver-jfd6r"
Apr 17 23:41:11.846632 kubelet[3208]: I0417 23:41:11.843953 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8e90ce8-ae92-43fe-bead-1cd18e86c253-kubelet-dir\") pod \"csi-node-driver-jfd6r\" (UID: \"b8e90ce8-ae92-43fe-bead-1cd18e86c253\") " pod="calico-system/csi-node-driver-jfd6r"
Apr 17 23:41:11.846632 kubelet[3208]: I0417 23:41:11.843969 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8e90ce8-ae92-43fe-bead-1cd18e86c253-socket-dir\") pod \"csi-node-driver-jfd6r\" (UID: \"b8e90ce8-ae92-43fe-bead-1cd18e86c253\") " pod="calico-system/csi-node-driver-jfd6r"
Apr 17 23:41:11.846632 kubelet[3208]: I0417 23:41:11.844039 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9c5\" (UniqueName: \"kubernetes.io/projected/b8e90ce8-ae92-43fe-bead-1cd18e86c253-kube-api-access-8n9c5\") pod \"csi-node-driver-jfd6r\" (UID: \"b8e90ce8-ae92-43fe-bead-1cd18e86c253\") " pod="calico-system/csi-node-driver-jfd6r"
Apr 17 23:41:11.846632 kubelet[3208]: I0417 23:41:11.844057 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8e90ce8-ae92-43fe-bead-1cd18e86c253-registration-dir\") pod \"csi-node-driver-jfd6r\" (UID: \"b8e90ce8-ae92-43fe-bead-1cd18e86c253\") " pod="calico-system/csi-node-driver-jfd6r"
Apr 17 23:41:11.847450 kubelet[3208]: E0417 23:41:11.847425 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.847565 kubelet[3208]: W0417 23:41:11.847551 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.847657 kubelet[3208]: E0417 23:41:11.847645 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.857146 kubelet[3208]: E0417 23:41:11.854620 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.857146 kubelet[3208]: W0417 23:41:11.854716 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.857146 kubelet[3208]: E0417 23:41:11.854744 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.857146 kubelet[3208]: E0417 23:41:11.855705 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.857146 kubelet[3208]: W0417 23:41:11.855720 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.857146 kubelet[3208]: E0417 23:41:11.855743 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.857681 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.861208 kubelet[3208]: W0417 23:41:11.857698 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.857718 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.860812 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.861208 kubelet[3208]: W0417 23:41:11.860827 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.860847 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.861162 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.861208 kubelet[3208]: W0417 23:41:11.861173 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.861208 kubelet[3208]: E0417 23:41:11.861187 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.863648 kubelet[3208]: E0417 23:41:11.862799 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.863648 kubelet[3208]: W0417 23:41:11.862817 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.863648 kubelet[3208]: E0417 23:41:11.862833 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.863648 kubelet[3208]: E0417 23:41:11.863351 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.863648 kubelet[3208]: W0417 23:41:11.863364 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.863648 kubelet[3208]: E0417 23:41:11.863380 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.864709 kubelet[3208]: E0417 23:41:11.864419 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.864709 kubelet[3208]: W0417 23:41:11.864433 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.864709 kubelet[3208]: E0417 23:41:11.864447 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.865095 kubelet[3208]: E0417 23:41:11.864958 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.865095 kubelet[3208]: W0417 23:41:11.864973 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.865095 kubelet[3208]: E0417 23:41:11.864987 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.865629 kubelet[3208]: E0417 23:41:11.865471 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.865629 kubelet[3208]: W0417 23:41:11.865484 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.865629 kubelet[3208]: E0417 23:41:11.865497 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:11.866056 kubelet[3208]: E0417 23:41:11.865918 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:11.866056 kubelet[3208]: W0417 23:41:11.865930 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:11.866056 kubelet[3208]: E0417 23:41:11.865943 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:41:11.866493 kubelet[3208]: E0417 23:41:11.866356 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.866493 kubelet[3208]: W0417 23:41:11.866370 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.866493 kubelet[3208]: E0417 23:41:11.866383 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.866956 kubelet[3208]: E0417 23:41:11.866835 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.866956 kubelet[3208]: W0417 23:41:11.866848 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.866956 kubelet[3208]: E0417 23:41:11.866862 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.867374 kubelet[3208]: E0417 23:41:11.867239 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.867374 kubelet[3208]: W0417 23:41:11.867252 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.867374 kubelet[3208]: E0417 23:41:11.867264 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.867893 kubelet[3208]: E0417 23:41:11.867755 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.867893 kubelet[3208]: W0417 23:41:11.867768 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.867893 kubelet[3208]: E0417 23:41:11.867782 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.868378 kubelet[3208]: E0417 23:41:11.868192 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.868378 kubelet[3208]: W0417 23:41:11.868204 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.868378 kubelet[3208]: E0417 23:41:11.868217 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.868794 kubelet[3208]: E0417 23:41:11.868580 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.868794 kubelet[3208]: W0417 23:41:11.868607 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.868794 kubelet[3208]: E0417 23:41:11.868620 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.869093 kubelet[3208]: E0417 23:41:11.869081 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.869194 kubelet[3208]: W0417 23:41:11.869164 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.869194 kubelet[3208]: E0417 23:41:11.869180 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.869758 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.872157 kubelet[3208]: W0417 23:41:11.869774 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.869788 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.870069 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.872157 kubelet[3208]: W0417 23:41:11.870079 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.870090 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.871930 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.872157 kubelet[3208]: W0417 23:41:11.871943 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.872157 kubelet[3208]: E0417 23:41:11.871957 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.872658 kubelet[3208]: E0417 23:41:11.872645 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.872773 kubelet[3208]: W0417 23:41:11.872727 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.872773 kubelet[3208]: E0417 23:41:11.872749 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.873301 kubelet[3208]: E0417 23:41:11.873107 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.873301 kubelet[3208]: W0417 23:41:11.873118 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.873301 kubelet[3208]: E0417 23:41:11.873130 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.874063 kubelet[3208]: E0417 23:41:11.873952 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.874063 kubelet[3208]: W0417 23:41:11.873970 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.874063 kubelet[3208]: E0417 23:41:11.873985 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.875094 kubelet[3208]: E0417 23:41:11.875080 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.875198 kubelet[3208]: W0417 23:41:11.875186 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.875440 kubelet[3208]: E0417 23:41:11.875247 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.876224 kubelet[3208]: E0417 23:41:11.876184 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.876224 kubelet[3208]: W0417 23:41:11.876198 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.877685 kubelet[3208]: E0417 23:41:11.876447 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.879049 kubelet[3208]: E0417 23:41:11.879033 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.879218 kubelet[3208]: W0417 23:41:11.879194 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.879300 kubelet[3208]: E0417 23:41:11.879287 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.881712 kubelet[3208]: E0417 23:41:11.881673 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.883366 kubelet[3208]: W0417 23:41:11.883087 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.883366 kubelet[3208]: E0417 23:41:11.883114 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.887317 kubelet[3208]: E0417 23:41:11.883813 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.887317 kubelet[3208]: W0417 23:41:11.883828 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.887317 kubelet[3208]: E0417 23:41:11.883843 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.902668 kubelet[3208]: E0417 23:41:11.902340 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.902668 kubelet[3208]: W0417 23:41:11.902360 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.902668 kubelet[3208]: E0417 23:41:11.902384 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.905116 kubelet[3208]: E0417 23:41:11.905086 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.905344 kubelet[3208]: W0417 23:41:11.905326 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.906017 kubelet[3208]: E0417 23:41:11.905996 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.945178 kubelet[3208]: E0417 23:41:11.945153 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.945315 kubelet[3208]: W0417 23:41:11.945303 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.945477 kubelet[3208]: E0417 23:41:11.945395 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.945768 kubelet[3208]: E0417 23:41:11.945751 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.945864 kubelet[3208]: W0417 23:41:11.945853 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.945941 kubelet[3208]: E0417 23:41:11.945927 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.946326 kubelet[3208]: E0417 23:41:11.946307 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.946398 kubelet[3208]: W0417 23:41:11.946339 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.946398 kubelet[3208]: E0417 23:41:11.946355 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.946740 kubelet[3208]: E0417 23:41:11.946724 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.946740 kubelet[3208]: W0417 23:41:11.946741 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.946854 kubelet[3208]: E0417 23:41:11.946754 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.947104 kubelet[3208]: E0417 23:41:11.947090 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.947104 kubelet[3208]: W0417 23:41:11.947105 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.947213 kubelet[3208]: E0417 23:41:11.947118 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.947605 kubelet[3208]: E0417 23:41:11.947431 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.947605 kubelet[3208]: W0417 23:41:11.947445 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.947605 kubelet[3208]: E0417 23:41:11.947476 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.947821 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.948611 kubelet[3208]: W0417 23:41:11.947834 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.947861 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.948158 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.948611 kubelet[3208]: W0417 23:41:11.948190 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.948204 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.948470 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.948611 kubelet[3208]: W0417 23:41:11.948482 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.948611 kubelet[3208]: E0417 23:41:11.948496 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.949032 kubelet[3208]: E0417 23:41:11.948773 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.949032 kubelet[3208]: W0417 23:41:11.948783 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.949032 kubelet[3208]: E0417 23:41:11.948796 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.949161 kubelet[3208]: E0417 23:41:11.949045 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.949161 kubelet[3208]: W0417 23:41:11.949055 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.949161 kubelet[3208]: E0417 23:41:11.949066 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.949327 kubelet[3208]: E0417 23:41:11.949304 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.949327 kubelet[3208]: W0417 23:41:11.949323 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.949440 kubelet[3208]: E0417 23:41:11.949335 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.949813 kubelet[3208]: E0417 23:41:11.949585 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.949813 kubelet[3208]: W0417 23:41:11.949618 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.949813 kubelet[3208]: E0417 23:41:11.949631 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.949993 kubelet[3208]: E0417 23:41:11.949891 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.949993 kubelet[3208]: W0417 23:41:11.949901 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.949993 kubelet[3208]: E0417 23:41:11.949914 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.950414 kubelet[3208]: E0417 23:41:11.950223 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.950414 kubelet[3208]: W0417 23:41:11.950239 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.950414 kubelet[3208]: E0417 23:41:11.950254 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.950576 kubelet[3208]: E0417 23:41:11.950526 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.950576 kubelet[3208]: W0417 23:41:11.950537 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.950576 kubelet[3208]: E0417 23:41:11.950549 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.950909 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.951937 kubelet[3208]: W0417 23:41:11.950922 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.950936 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.951202 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.951937 kubelet[3208]: W0417 23:41:11.951223 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.951236 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.951523 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.951937 kubelet[3208]: W0417 23:41:11.951534 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.951547 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.951937 kubelet[3208]: E0417 23:41:11.951892 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.952610 kubelet[3208]: W0417 23:41:11.951904 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.952610 kubelet[3208]: E0417 23:41:11.951926 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.952610 kubelet[3208]: E0417 23:41:11.952331 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.952610 kubelet[3208]: W0417 23:41:11.952342 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.952610 kubelet[3208]: E0417 23:41:11.952355 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.953130 kubelet[3208]: E0417 23:41:11.952695 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.953130 kubelet[3208]: W0417 23:41:11.952705 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.953130 kubelet[3208]: E0417 23:41:11.952718 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.953130 kubelet[3208]: E0417 23:41:11.952996 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.953130 kubelet[3208]: W0417 23:41:11.953006 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.953130 kubelet[3208]: E0417 23:41:11.953018 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.953794 kubelet[3208]: E0417 23:41:11.953314 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.953794 kubelet[3208]: W0417 23:41:11.953325 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.953794 kubelet[3208]: E0417 23:41:11.953337 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:11.953794 kubelet[3208]: E0417 23:41:11.953649 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.953794 kubelet[3208]: W0417 23:41:11.953676 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.953794 kubelet[3208]: E0417 23:41:11.953689 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.970370 kubelet[3208]: E0417 23:41:11.970335 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:11.970370 kubelet[3208]: W0417 23:41:11.970361 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:11.972368 kubelet[3208]: E0417 23:41:11.970385 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:11.972426 containerd[1996]: time="2026-04-17T23:41:11.970680966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:11.972426 containerd[1996]: time="2026-04-17T23:41:11.970746773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:11.972426 containerd[1996]: time="2026-04-17T23:41:11.970776087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:11.972426 containerd[1996]: time="2026-04-17T23:41:11.970905057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:11.973915 containerd[1996]: time="2026-04-17T23:41:11.973422149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd5g9,Uid:b06af313-4a3a-4a36-a3fc-4f9f4ba3c530,Namespace:calico-system,Attempt:0,}" Apr 17 23:41:11.994790 systemd[1]: Started cri-containerd-2f18ff7a2722f817a21aa10404e366c435068687d51441ffabcdf880a9547a70.scope - libcontainer container 2f18ff7a2722f817a21aa10404e366c435068687d51441ffabcdf880a9547a70. Apr 17 23:41:12.023812 containerd[1996]: time="2026-04-17T23:41:12.020863091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:12.024566 containerd[1996]: time="2026-04-17T23:41:12.024265398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:12.024753 containerd[1996]: time="2026-04-17T23:41:12.024547901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:12.025029 containerd[1996]: time="2026-04-17T23:41:12.024963800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:12.053862 systemd[1]: Started cri-containerd-9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f.scope - libcontainer container 9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f. 
Apr 17 23:41:12.081249 containerd[1996]: time="2026-04-17T23:41:12.080265141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd6548f7-swr5k,Uid:3d4eb031-4095-4a17-a513-1b10a029ef79,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f18ff7a2722f817a21aa10404e366c435068687d51441ffabcdf880a9547a70\""
Apr 17 23:41:12.087424 containerd[1996]: time="2026-04-17T23:41:12.087380902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 17 23:41:12.108072 containerd[1996]: time="2026-04-17T23:41:12.108028046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd5g9,Uid:b06af313-4a3a-4a36-a3fc-4f9f4ba3c530,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\""
Apr 17 23:41:13.584152 kubelet[3208]: E0417 23:41:13.583346 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:13.705640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883801968.mount: Deactivated successfully.
Apr 17 23:41:14.762409 containerd[1996]: time="2026-04-17T23:41:14.762359647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:14.763748 containerd[1996]: time="2026-04-17T23:41:14.763606364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:41:14.766461 containerd[1996]: time="2026-04-17T23:41:14.764813377Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:14.767731 containerd[1996]: time="2026-04-17T23:41:14.767696869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:14.768421 containerd[1996]: time="2026-04-17T23:41:14.768384920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.68095143s" Apr 17 23:41:14.768497 containerd[1996]: time="2026-04-17T23:41:14.768428237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:41:14.769665 containerd[1996]: time="2026-04-17T23:41:14.769641042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:41:14.791102 containerd[1996]: time="2026-04-17T23:41:14.791059066Z" level=info msg="CreateContainer within sandbox \"2f18ff7a2722f817a21aa10404e366c435068687d51441ffabcdf880a9547a70\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:41:14.827172 containerd[1996]: time="2026-04-17T23:41:14.827120398Z" level=info msg="CreateContainer within sandbox \"2f18ff7a2722f817a21aa10404e366c435068687d51441ffabcdf880a9547a70\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f96c921a1f81b9812b525088fa0bbf4af7fc9f6084e2f1ebd670a2a075b0f830\"" Apr 17 23:41:14.828052 containerd[1996]: time="2026-04-17T23:41:14.828014739Z" level=info msg="StartContainer for \"f96c921a1f81b9812b525088fa0bbf4af7fc9f6084e2f1ebd670a2a075b0f830\"" Apr 17 23:41:14.903832 systemd[1]: Started cri-containerd-f96c921a1f81b9812b525088fa0bbf4af7fc9f6084e2f1ebd670a2a075b0f830.scope - libcontainer container f96c921a1f81b9812b525088fa0bbf4af7fc9f6084e2f1ebd670a2a075b0f830. Apr 17 23:41:14.960804 containerd[1996]: time="2026-04-17T23:41:14.960757158Z" level=info msg="StartContainer for \"f96c921a1f81b9812b525088fa0bbf4af7fc9f6084e2f1ebd670a2a075b0f830\" returns successfully" Apr 17 23:41:15.589830 kubelet[3208]: E0417 23:41:15.589775 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253" Apr 17 23:41:15.825625 kubelet[3208]: E0417 23:41:15.825555 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.825625 kubelet[3208]: W0417 23:41:15.825586 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.825874 kubelet[3208]: E0417 23:41:15.825645 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.826130 kubelet[3208]: E0417 23:41:15.825949 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.826130 kubelet[3208]: W0417 23:41:15.825962 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.826328 kubelet[3208]: E0417 23:41:15.826127 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.826430 kubelet[3208]: E0417 23:41:15.826402 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.826510 kubelet[3208]: W0417 23:41:15.826432 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.826510 kubelet[3208]: E0417 23:41:15.826448 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.826785 kubelet[3208]: E0417 23:41:15.826763 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.826785 kubelet[3208]: W0417 23:41:15.826780 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.826927 kubelet[3208]: E0417 23:41:15.826796 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.827115 kubelet[3208]: E0417 23:41:15.827094 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.827115 kubelet[3208]: W0417 23:41:15.827112 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.827254 kubelet[3208]: E0417 23:41:15.827127 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.827373 kubelet[3208]: E0417 23:41:15.827355 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.827373 kubelet[3208]: W0417 23:41:15.827370 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.827656 kubelet[3208]: E0417 23:41:15.827387 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.827656 kubelet[3208]: E0417 23:41:15.827642 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.827656 kubelet[3208]: W0417 23:41:15.827653 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.827888 kubelet[3208]: E0417 23:41:15.827667 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.827940 kubelet[3208]: E0417 23:41:15.827894 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.827940 kubelet[3208]: W0417 23:41:15.827904 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.827940 kubelet[3208]: E0417 23:41:15.827917 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.828167 kubelet[3208]: E0417 23:41:15.828150 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.828167 kubelet[3208]: W0417 23:41:15.828165 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.828332 kubelet[3208]: E0417 23:41:15.828178 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.828415 kubelet[3208]: E0417 23:41:15.828394 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.828415 kubelet[3208]: W0417 23:41:15.828405 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.828565 kubelet[3208]: E0417 23:41:15.828417 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.828682 kubelet[3208]: E0417 23:41:15.828664 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.828682 kubelet[3208]: W0417 23:41:15.828675 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.828838 kubelet[3208]: E0417 23:41:15.828689 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.828917 kubelet[3208]: E0417 23:41:15.828909 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.828961 kubelet[3208]: W0417 23:41:15.828922 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.828961 kubelet[3208]: E0417 23:41:15.828935 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.829194 kubelet[3208]: E0417 23:41:15.829174 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.829194 kubelet[3208]: W0417 23:41:15.829190 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.829352 kubelet[3208]: E0417 23:41:15.829205 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.829461 kubelet[3208]: E0417 23:41:15.829444 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.829461 kubelet[3208]: W0417 23:41:15.829458 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.829634 kubelet[3208]: E0417 23:41:15.829472 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.829743 kubelet[3208]: E0417 23:41:15.829726 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.829805 kubelet[3208]: W0417 23:41:15.829742 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.829805 kubelet[3208]: E0417 23:41:15.829756 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.875306 kubelet[3208]: E0417 23:41:15.875189 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.875306 kubelet[3208]: W0417 23:41:15.875214 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.875306 kubelet[3208]: E0417 23:41:15.875237 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.876006 kubelet[3208]: E0417 23:41:15.875946 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.876006 kubelet[3208]: W0417 23:41:15.875965 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.876006 kubelet[3208]: E0417 23:41:15.875984 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.876725 kubelet[3208]: E0417 23:41:15.876554 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.876725 kubelet[3208]: W0417 23:41:15.876568 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.876725 kubelet[3208]: E0417 23:41:15.876582 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.877165 kubelet[3208]: E0417 23:41:15.877063 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.877165 kubelet[3208]: W0417 23:41:15.877076 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.877165 kubelet[3208]: E0417 23:41:15.877089 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.877769 kubelet[3208]: E0417 23:41:15.877568 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.877769 kubelet[3208]: W0417 23:41:15.877587 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.877769 kubelet[3208]: E0417 23:41:15.877649 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.878250 kubelet[3208]: E0417 23:41:15.878230 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.878250 kubelet[3208]: W0417 23:41:15.878245 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.878387 kubelet[3208]: E0417 23:41:15.878259 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.878542 kubelet[3208]: E0417 23:41:15.878524 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.878542 kubelet[3208]: W0417 23:41:15.878538 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.878744 kubelet[3208]: E0417 23:41:15.878553 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.878823 kubelet[3208]: E0417 23:41:15.878805 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.878823 kubelet[3208]: W0417 23:41:15.878816 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.878823 kubelet[3208]: E0417 23:41:15.878828 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.879091 kubelet[3208]: E0417 23:41:15.879074 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.879091 kubelet[3208]: W0417 23:41:15.879088 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.879508 kubelet[3208]: E0417 23:41:15.879101 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.879508 kubelet[3208]: E0417 23:41:15.879308 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.879508 kubelet[3208]: W0417 23:41:15.879318 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.879508 kubelet[3208]: E0417 23:41:15.879330 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.879752 kubelet[3208]: E0417 23:41:15.879527 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.879752 kubelet[3208]: W0417 23:41:15.879537 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.879752 kubelet[3208]: E0417 23:41:15.879549 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.879887 kubelet[3208]: E0417 23:41:15.879758 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.879887 kubelet[3208]: W0417 23:41:15.879767 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.879887 kubelet[3208]: E0417 23:41:15.879781 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.880211 kubelet[3208]: E0417 23:41:15.880008 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.880211 kubelet[3208]: W0417 23:41:15.880017 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.880211 kubelet[3208]: E0417 23:41:15.880029 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.880544 kubelet[3208]: E0417 23:41:15.880522 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.880544 kubelet[3208]: W0417 23:41:15.880542 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.880686 kubelet[3208]: E0417 23:41:15.880556 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:41:15.880843 kubelet[3208]: E0417 23:41:15.880825 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.880843 kubelet[3208]: W0417 23:41:15.880839 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.880956 kubelet[3208]: E0417 23:41:15.880853 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:41:15.881163 kubelet[3208]: E0417 23:41:15.881145 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:41:15.881222 kubelet[3208]: W0417 23:41:15.881158 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:41:15.881222 kubelet[3208]: E0417 23:41:15.881197 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:41:15.881687 kubelet[3208]: E0417 23:41:15.881669 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:15.881687 kubelet[3208]: W0417 23:41:15.881684 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:15.881816 kubelet[3208]: E0417 23:41:15.881698 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:15.881943 kubelet[3208]: E0417 23:41:15.881928 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:41:15.881943 kubelet[3208]: W0417 23:41:15.881941 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:41:15.882101 kubelet[3208]: E0417 23:41:15.881953 3208 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:41:16.127624 containerd[1996]: time="2026-04-17T23:41:16.127480578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:16.129920 containerd[1996]: time="2026-04-17T23:41:16.129868133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 23:41:16.130631 containerd[1996]: time="2026-04-17T23:41:16.130577736Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:16.133149 containerd[1996]: time="2026-04-17T23:41:16.133094462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:16.134136 containerd[1996]: time="2026-04-17T23:41:16.133844818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.362944259s"
Apr 17 23:41:16.134136 containerd[1996]: time="2026-04-17T23:41:16.133887747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 23:41:16.139361 containerd[1996]: time="2026-04-17T23:41:16.139314212Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:41:16.169847 containerd[1996]: time="2026-04-17T23:41:16.169793806Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea\""
Apr 17 23:41:16.171900 containerd[1996]: time="2026-04-17T23:41:16.171861407Z" level=info msg="StartContainer for \"78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea\""
Apr 17 23:41:16.217805 systemd[1]: Started cri-containerd-78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea.scope - libcontainer container 78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea.
Apr 17 23:41:16.249754 containerd[1996]: time="2026-04-17T23:41:16.249443233Z" level=info msg="StartContainer for \"78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea\" returns successfully"
Apr 17 23:41:16.263255 systemd[1]: cri-containerd-78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea.scope: Deactivated successfully.
Apr 17 23:41:16.383478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea-rootfs.mount: Deactivated successfully.
Apr 17 23:41:16.513775 containerd[1996]: time="2026-04-17T23:41:16.491534075Z" level=info msg="shim disconnected" id=78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea namespace=k8s.io
Apr 17 23:41:16.514091 containerd[1996]: time="2026-04-17T23:41:16.513774107Z" level=warning msg="cleaning up after shim disconnected" id=78ebb5852b78548b0945e0dedc36f389b0b665558badf5bf4037913fcf745aea namespace=k8s.io
Apr 17 23:41:16.514091 containerd[1996]: time="2026-04-17T23:41:16.513791941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:41:16.729224 kubelet[3208]: I0417 23:41:16.729199 3208 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:41:16.732922 containerd[1996]: time="2026-04-17T23:41:16.732886634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:41:16.753322 kubelet[3208]: I0417 23:41:16.752725 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-bd6548f7-swr5k" podStartSLOduration=3.068105585 podStartE2EDuration="5.752704749s" podCreationTimestamp="2026-04-17 23:41:11 +0000 UTC" firstStartedPulling="2026-04-17 23:41:12.084846106 +0000 UTC m=+22.659100537" lastFinishedPulling="2026-04-17 23:41:14.769445255 +0000 UTC m=+25.343699701" observedRunningTime="2026-04-17 23:41:15.740282149 +0000 UTC m=+26.314536600" watchObservedRunningTime="2026-04-17 23:41:16.752704749 +0000 UTC m=+27.326959204"
Apr 17 23:41:17.583890 kubelet[3208]: E0417 23:41:17.583851 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:19.582632 kubelet[3208]: E0417 23:41:19.581762 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:21.582627 kubelet[3208]: E0417 23:41:21.581480 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:23.583073 kubelet[3208]: E0417 23:41:23.582955 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:24.236099 kubelet[3208]: I0417 23:41:24.235581 3208 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:41:25.585031 kubelet[3208]: E0417 23:41:25.584746 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:26.638716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784617026.mount: Deactivated successfully.
Apr 17 23:41:26.701122 containerd[1996]: time="2026-04-17T23:41:26.700766211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:41:26.701771 containerd[1996]: time="2026-04-17T23:41:26.694292659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:26.707647 containerd[1996]: time="2026-04-17T23:41:26.707584498Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:26.713333 containerd[1996]: time="2026-04-17T23:41:26.712294167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:26.713333 containerd[1996]: time="2026-04-17T23:41:26.713154301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.980227115s"
Apr 17 23:41:26.713333 containerd[1996]: time="2026-04-17T23:41:26.713197675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:41:26.728590 containerd[1996]: time="2026-04-17T23:41:26.728546404Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:41:26.783707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975814952.mount: Deactivated successfully.
Apr 17 23:41:26.807841 containerd[1996]: time="2026-04-17T23:41:26.807790324Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b\""
Apr 17 23:41:26.809150 containerd[1996]: time="2026-04-17T23:41:26.809114816Z" level=info msg="StartContainer for \"b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b\""
Apr 17 23:41:26.887076 systemd[1]: Started cri-containerd-b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b.scope - libcontainer container b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b.
Apr 17 23:41:26.947107 containerd[1996]: time="2026-04-17T23:41:26.947055617Z" level=info msg="StartContainer for \"b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b\" returns successfully"
Apr 17 23:41:26.990925 systemd[1]: cri-containerd-b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b.scope: Deactivated successfully.
Apr 17 23:41:27.040918 containerd[1996]: time="2026-04-17T23:41:27.040834335Z" level=info msg="shim disconnected" id=b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b namespace=k8s.io
Apr 17 23:41:27.040918 containerd[1996]: time="2026-04-17T23:41:27.040910873Z" level=warning msg="cleaning up after shim disconnected" id=b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b namespace=k8s.io
Apr 17 23:41:27.040918 containerd[1996]: time="2026-04-17T23:41:27.040923268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:41:27.583038 kubelet[3208]: E0417 23:41:27.581588 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:27.639465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1fa825b582c3f9e5cca21d7a0b23e06807799bb8ed1453408242ef113c8f55b-rootfs.mount: Deactivated successfully.
Apr 17 23:41:27.772727 containerd[1996]: time="2026-04-17T23:41:27.772656655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:41:29.583323 kubelet[3208]: E0417 23:41:29.583040 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:31.387690 containerd[1996]: time="2026-04-17T23:41:31.387637052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:31.389793 containerd[1996]: time="2026-04-17T23:41:31.389736975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 23:41:31.392232 containerd[1996]: time="2026-04-17T23:41:31.392191595Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:31.395803 containerd[1996]: time="2026-04-17T23:41:31.395497822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:31.396408 containerd[1996]: time="2026-04-17T23:41:31.396371605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.623645788s"
Apr 17 23:41:31.396497 containerd[1996]: time="2026-04-17T23:41:31.396412583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 23:41:31.403214 containerd[1996]: time="2026-04-17T23:41:31.403176098Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:41:31.429561 containerd[1996]: time="2026-04-17T23:41:31.429494008Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747\""
Apr 17 23:41:31.430544 containerd[1996]: time="2026-04-17T23:41:31.430506446Z" level=info msg="StartContainer for \"e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747\""
Apr 17 23:41:31.471822 systemd[1]: Started cri-containerd-e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747.scope - libcontainer container e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747.
Apr 17 23:41:31.510578 containerd[1996]: time="2026-04-17T23:41:31.510531444Z" level=info msg="StartContainer for \"e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747\" returns successfully"
Apr 17 23:41:31.583523 kubelet[3208]: E0417 23:41:31.583090 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253"
Apr 17 23:41:32.430095 systemd[1]: cri-containerd-e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747.scope: Deactivated successfully.
Apr 17 23:41:32.471633 kubelet[3208]: I0417 23:41:32.459022 3208 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 17 23:41:32.477002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747-rootfs.mount: Deactivated successfully.
Apr 17 23:41:32.500183 containerd[1996]: time="2026-04-17T23:41:32.500109731Z" level=info msg="shim disconnected" id=e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747 namespace=k8s.io
Apr 17 23:41:32.500183 containerd[1996]: time="2026-04-17T23:41:32.500189361Z" level=warning msg="cleaning up after shim disconnected" id=e486e310e989180df13642962595c2a56f463f242f8952d372b2d769b71a0747 namespace=k8s.io
Apr 17 23:41:32.500183 containerd[1996]: time="2026-04-17T23:41:32.500205123Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:41:32.718226 kubelet[3208]: I0417 23:41:32.718110 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c163f60a-efa5-4d8b-876e-506e8f185561-calico-apiserver-certs\") pod \"calico-apiserver-845ff8dcf7-qfvhl\" (UID: \"c163f60a-efa5-4d8b-876e-506e8f185561\") " pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl"
Apr 17 23:41:32.718226 kubelet[3208]: I0417 23:41:32.718174 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97gzt\" (UniqueName: \"kubernetes.io/projected/c163f60a-efa5-4d8b-876e-506e8f185561-kube-api-access-97gzt\") pod \"calico-apiserver-845ff8dcf7-qfvhl\" (UID: \"c163f60a-efa5-4d8b-876e-506e8f185561\") " pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl"
Apr 17 23:41:32.769367 systemd[1]: Created slice kubepods-besteffort-pod1550559b_67fa_454a_904e_63a2b926f540.slice - libcontainer container kubepods-besteffort-pod1550559b_67fa_454a_904e_63a2b926f540.slice.
Apr 17 23:41:32.770325 systemd[1]: Created slice kubepods-besteffort-podc163f60a_efa5_4d8b_876e_506e8f185561.slice - libcontainer container kubepods-besteffort-podc163f60a_efa5_4d8b_876e_506e8f185561.slice.
Apr 17 23:41:32.773924 systemd[1]: Created slice kubepods-besteffort-podd03f1c1f_c3ff_4e65_bb8e_b95383713725.slice - libcontainer container kubepods-besteffort-podd03f1c1f_c3ff_4e65_bb8e_b95383713725.slice.
Apr 17 23:41:32.783474 systemd[1]: Created slice kubepods-burstable-pode385655f_a6bc_4d82_b34f_39a693e4f020.slice - libcontainer container kubepods-burstable-pode385655f_a6bc_4d82_b34f_39a693e4f020.slice.
Apr 17 23:41:32.796420 systemd[1]: Created slice kubepods-besteffort-pod7681fd73_755e_41c9_ae03_ddbe78c2fcd5.slice - libcontainer container kubepods-besteffort-pod7681fd73_755e_41c9_ae03_ddbe78c2fcd5.slice.
Apr 17 23:41:32.815409 systemd[1]: Created slice kubepods-burstable-pod0352e4c9_6574_48ee_8802_b8c6cbb3a9cc.slice - libcontainer container kubepods-burstable-pod0352e4c9_6574_48ee_8802_b8c6cbb3a9cc.slice.
Apr 17 23:41:32.819359 kubelet[3208]: I0417 23:41:32.819180 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1550559b-67fa-454a-904e-63a2b926f540-whisker-backend-key-pair\") pod \"whisker-759f44599-zm69g\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " pod="calico-system/whisker-759f44599-zm69g"
Apr 17 23:41:32.819359 kubelet[3208]: I0417 23:41:32.819231 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hft\" (UniqueName: \"kubernetes.io/projected/1550559b-67fa-454a-904e-63a2b926f540-kube-api-access-j9hft\") pod \"whisker-759f44599-zm69g\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " pod="calico-system/whisker-759f44599-zm69g"
Apr 17 23:41:32.819359 kubelet[3208]: I0417 23:41:32.819256 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f1c1f-c3ff-4e65-bb8e-b95383713725-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-rfp8t\" (UID: \"d03f1c1f-c3ff-4e65-bb8e-b95383713725\") " pod="calico-system/goldmane-9f7667bb8-rfp8t"
Apr 17 23:41:32.820633 kubelet[3208]: I0417 23:41:32.820121 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66zjc\" (UniqueName: \"kubernetes.io/projected/7681fd73-755e-41c9-ae03-ddbe78c2fcd5-kube-api-access-66zjc\") pod \"calico-kube-controllers-7ff76c96b9-qlvcl\" (UID: \"7681fd73-755e-41c9-ae03-ddbe78c2fcd5\") " pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl"
Apr 17 23:41:32.821365 kubelet[3208]: I0417 23:41:32.820370 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e385655f-a6bc-4d82-b34f-39a693e4f020-config-volume\") pod \"coredns-7d764666f9-x8pnz\" (UID: \"e385655f-a6bc-4d82-b34f-39a693e4f020\") " pod="kube-system/coredns-7d764666f9-x8pnz"
Apr 17 23:41:32.821475 kubelet[3208]: I0417 23:41:32.821406 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74bw\" (UniqueName: \"kubernetes.io/projected/e385655f-a6bc-4d82-b34f-39a693e4f020-kube-api-access-f74bw\") pod \"coredns-7d764666f9-x8pnz\" (UID: \"e385655f-a6bc-4d82-b34f-39a693e4f020\") " pod="kube-system/coredns-7d764666f9-x8pnz"
Apr 17 23:41:32.821475 kubelet[3208]: I0417 23:41:32.821439 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjsj\" (UniqueName: \"kubernetes.io/projected/0352e4c9-6574-48ee-8802-b8c6cbb3a9cc-kube-api-access-sjjsj\") pod \"coredns-7d764666f9-tkjr8\" (UID: \"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc\") " pod="kube-system/coredns-7d764666f9-tkjr8"
Apr 17 23:41:32.821602 kubelet[3208]: I0417 23:41:32.821557 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzf8z\" (UniqueName: \"kubernetes.io/projected/c35def7d-4496-4fa2-b60e-d4ff32612dae-kube-api-access-kzf8z\") pod \"calico-apiserver-845ff8dcf7-2f8gg\" (UID: \"c35def7d-4496-4fa2-b60e-d4ff32612dae\") " pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg"
Apr 17 23:41:32.821669 kubelet[3208]: I0417 23:41:32.821620 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d03f1c1f-c3ff-4e65-bb8e-b95383713725-goldmane-key-pair\") pod \"goldmane-9f7667bb8-rfp8t\" (UID: \"d03f1c1f-c3ff-4e65-bb8e-b95383713725\") " pod="calico-system/goldmane-9f7667bb8-rfp8t"
Apr 17 23:41:32.821669 kubelet[3208]: I0417 23:41:32.821646 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0352e4c9-6574-48ee-8802-b8c6cbb3a9cc-config-volume\") pod \"coredns-7d764666f9-tkjr8\" (UID: \"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc\") " pod="kube-system/coredns-7d764666f9-tkjr8"
Apr 17 23:41:32.821791 kubelet[3208]: I0417 23:41:32.821670 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7681fd73-755e-41c9-ae03-ddbe78c2fcd5-tigera-ca-bundle\") pod \"calico-kube-controllers-7ff76c96b9-qlvcl\" (UID: \"7681fd73-755e-41c9-ae03-ddbe78c2fcd5\") " pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl"
Apr 17 23:41:32.821791 kubelet[3208]: I0417 23:41:32.821721 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c35def7d-4496-4fa2-b60e-d4ff32612dae-calico-apiserver-certs\") pod \"calico-apiserver-845ff8dcf7-2f8gg\" (UID: \"c35def7d-4496-4fa2-b60e-d4ff32612dae\") " pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg"
Apr 17 23:41:32.821791 kubelet[3208]: I0417 23:41:32.821750 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-nginx-config\") pod \"whisker-759f44599-zm69g\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " pod="calico-system/whisker-759f44599-zm69g"
Apr 17 23:41:32.821937 kubelet[3208]: I0417 23:41:32.821850 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-whisker-ca-bundle\") pod \"whisker-759f44599-zm69g\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " pod="calico-system/whisker-759f44599-zm69g"
Apr 17 23:41:32.821983 kubelet[3208]: I0417 23:41:32.821963 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03f1c1f-c3ff-4e65-bb8e-b95383713725-config\") pod \"goldmane-9f7667bb8-rfp8t\" (UID: \"d03f1c1f-c3ff-4e65-bb8e-b95383713725\") " pod="calico-system/goldmane-9f7667bb8-rfp8t"
Apr 17 23:41:32.822034 kubelet[3208]: I0417 23:41:32.821990 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6nz\" (UniqueName: \"kubernetes.io/projected/d03f1c1f-c3ff-4e65-bb8e-b95383713725-kube-api-access-2g6nz\") pod \"goldmane-9f7667bb8-rfp8t\" (UID: \"d03f1c1f-c3ff-4e65-bb8e-b95383713725\") " pod="calico-system/goldmane-9f7667bb8-rfp8t"
Apr 17 23:41:32.842049 systemd[1]: Created slice kubepods-besteffort-podc35def7d_4496_4fa2_b60e_d4ff32612dae.slice - libcontainer container kubepods-besteffort-podc35def7d_4496_4fa2_b60e_d4ff32612dae.slice.
Apr 17 23:41:32.896813 containerd[1996]: time="2026-04-17T23:41:32.896753422Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:41:32.928716 containerd[1996]: time="2026-04-17T23:41:32.928483685Z" level=info msg="CreateContainer within sandbox \"9fb9224b386c2fb8cef5f0b9bfc2b47a4b55ed00d03698e38b7dcb49e2f8ee6f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a\""
Apr 17 23:41:32.941662 containerd[1996]: time="2026-04-17T23:41:32.940007048Z" level=info msg="StartContainer for \"57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a\""
Apr 17 23:41:33.005945 systemd[1]: Started cri-containerd-57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a.scope - libcontainer container 57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a.
Apr 17 23:41:33.050199 containerd[1996]: time="2026-04-17T23:41:33.050057653Z" level=info msg="StartContainer for \"57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a\" returns successfully"
Apr 17 23:41:33.114676 containerd[1996]: time="2026-04-17T23:41:33.114633396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-x8pnz,Uid:e385655f-a6bc-4d82-b34f-39a693e4f020,Namespace:kube-system,Attempt:0,}"
Apr 17 23:41:33.127489 containerd[1996]: time="2026-04-17T23:41:33.127369340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff76c96b9-qlvcl,Uid:7681fd73-755e-41c9-ae03-ddbe78c2fcd5,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.128145 containerd[1996]: time="2026-04-17T23:41:33.128110335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tkjr8,Uid:0352e4c9-6574-48ee-8802-b8c6cbb3a9cc,Namespace:kube-system,Attempt:0,}"
Apr 17 23:41:33.131815 containerd[1996]: time="2026-04-17T23:41:33.131774065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-qfvhl,Uid:c163f60a-efa5-4d8b-876e-506e8f185561,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.135229 containerd[1996]: time="2026-04-17T23:41:33.135187825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rfp8t,Uid:d03f1c1f-c3ff-4e65-bb8e-b95383713725,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.139220 containerd[1996]: time="2026-04-17T23:41:33.139177711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759f44599-zm69g,Uid:1550559b-67fa-454a-904e-63a2b926f540,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.211363 containerd[1996]: time="2026-04-17T23:41:33.211321850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-2f8gg,Uid:c35def7d-4496-4fa2-b60e-d4ff32612dae,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.600175 systemd[1]: Created slice kubepods-besteffort-podb8e90ce8_ae92_43fe_bead_1cd18e86c253.slice - libcontainer container kubepods-besteffort-podb8e90ce8_ae92_43fe_bead_1cd18e86c253.slice.
Apr 17 23:41:33.662464 containerd[1996]: time="2026-04-17T23:41:33.662417583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfd6r,Uid:b8e90ce8-ae92-43fe-bead-1cd18e86c253,Namespace:calico-system,Attempt:0,}"
Apr 17 23:41:33.876842 containerd[1996]: time="2026-04-17T23:41:33.876484393Z" level=error msg="Failed to destroy network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.884178 containerd[1996]: time="2026-04-17T23:41:33.883062268Z" level=error msg="encountered an error cleaning up failed sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.884178 containerd[1996]: time="2026-04-17T23:41:33.883148146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tkjr8,Uid:0352e4c9-6574-48ee-8802-b8c6cbb3a9cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.884178 containerd[1996]: time="2026-04-17T23:41:33.883258770Z" level=error msg="Failed to destroy network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.884178 containerd[1996]: time="2026-04-17T23:41:33.883967909Z" level=error msg="encountered an error cleaning up failed sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.884178 containerd[1996]: time="2026-04-17T23:41:33.884028319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rfp8t,Uid:d03f1c1f-c3ff-4e65-bb8e-b95383713725,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.886387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e-shm.mount: Deactivated successfully.
Apr 17 23:41:33.901510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18-shm.mount: Deactivated successfully.
Apr 17 23:41:33.915692 containerd[1996]: time="2026-04-17T23:41:33.915635146Z" level=error msg="Failed to destroy network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.918914 containerd[1996]: time="2026-04-17T23:41:33.918858790Z" level=error msg="encountered an error cleaning up failed sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.919052 containerd[1996]: time="2026-04-17T23:41:33.918945648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-x8pnz,Uid:e385655f-a6bc-4d82-b34f-39a693e4f020,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.919112 containerd[1996]: time="2026-04-17T23:41:33.919073975Z" level=error msg="Failed to destroy network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.919813 containerd[1996]: time="2026-04-17T23:41:33.919769396Z" level=error msg="encountered an error cleaning up failed sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.919923 containerd[1996]: time="2026-04-17T23:41:33.919836294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff76c96b9-qlvcl,Uid:7681fd73-755e-41c9-ae03-ddbe78c2fcd5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.922412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab-shm.mount: Deactivated successfully.
Apr 17 23:41:33.937374 containerd[1996]: time="2026-04-17T23:41:33.937312662Z" level=error msg="Failed to destroy network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.939094 containerd[1996]: time="2026-04-17T23:41:33.938843806Z" level=error msg="encountered an error cleaning up failed sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.939094 containerd[1996]: time="2026-04-17T23:41:33.938920464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-2f8gg,Uid:c35def7d-4496-4fa2-b60e-d4ff32612dae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.939094 containerd[1996]: time="2026-04-17T23:41:33.939090417Z" level=error msg="Failed to destroy network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.939426 containerd[1996]: time="2026-04-17T23:41:33.939394076Z" level=error msg="encountered an error cleaning up failed sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.939503 containerd[1996]: time="2026-04-17T23:41:33.939457032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759f44599-zm69g,Uid:1550559b-67fa-454a-904e-63a2b926f540,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.954872 containerd[1996]: time="2026-04-17T23:41:33.954429712Z" level=error msg="Failed to destroy network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.955581 containerd[1996]: time="2026-04-17T23:41:33.955357858Z" level=error msg="encountered an error cleaning up failed sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.955981 containerd[1996]: time="2026-04-17T23:41:33.955432139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-qfvhl,Uid:c163f60a-efa5-4d8b-876e-506e8f185561,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.960157 kubelet[3208]: E0417 23:41:33.960106 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:41:33.961754 kubelet[3208]: E0417 23:41:33.961416 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\": plugin type=\"calico\" failed (add):
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.962557 kubelet[3208]: E0417 23:41:33.962336 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-rfp8t" Apr 17 23:41:33.962557 kubelet[3208]: E0417 23:41:33.962381 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-rfp8t" Apr 17 23:41:33.962557 kubelet[3208]: E0417 23:41:33.962459 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-rfp8t_calico-system(d03f1c1f-c3ff-4e65-bb8e-b95383713725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-rfp8t_calico-system(d03f1c1f-c3ff-4e65-bb8e-b95383713725)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-rfp8t" podUID="d03f1c1f-c3ff-4e65-bb8e-b95383713725" Apr 17 23:41:33.962837 kubelet[3208]: E0417 23:41:33.962798 3208 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl" Apr 17 23:41:33.962905 kubelet[3208]: E0417 23:41:33.962840 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl" Apr 17 23:41:33.962973 kubelet[3208]: E0417 23:41:33.962901 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845ff8dcf7-qfvhl_calico-system(c163f60a-efa5-4d8b-876e-506e8f185561)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845ff8dcf7-qfvhl_calico-system(c163f60a-efa5-4d8b-876e-506e8f185561)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl" podUID="c163f60a-efa5-4d8b-876e-506e8f185561" Apr 17 23:41:33.962973 kubelet[3208]: E0417 23:41:33.957841 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.963460 kubelet[3208]: E0417 23:41:33.962970 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-759f44599-zm69g" Apr 17 23:41:33.963460 kubelet[3208]: E0417 23:41:33.962988 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-759f44599-zm69g" Apr 17 23:41:33.963460 kubelet[3208]: E0417 23:41:33.963030 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-759f44599-zm69g_calico-system(1550559b-67fa-454a-904e-63a2b926f540)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-759f44599-zm69g_calico-system(1550559b-67fa-454a-904e-63a2b926f540)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-759f44599-zm69g" 
podUID="1550559b-67fa-454a-904e-63a2b926f540" Apr 17 23:41:33.964806 kubelet[3208]: E0417 23:41:33.963064 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.964806 kubelet[3208]: E0417 23:41:33.963092 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-tkjr8" Apr 17 23:41:33.964806 kubelet[3208]: E0417 23:41:33.963110 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-tkjr8" Apr 17 23:41:33.965038 kubelet[3208]: E0417 23:41:33.963148 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-tkjr8_kube-system(0352e4c9-6574-48ee-8802-b8c6cbb3a9cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-tkjr8_kube-system(0352e4c9-6574-48ee-8802-b8c6cbb3a9cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-tkjr8" podUID="0352e4c9-6574-48ee-8802-b8c6cbb3a9cc" Apr 17 23:41:33.965038 kubelet[3208]: E0417 23:41:33.963222 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.965038 kubelet[3208]: E0417 23:41:33.963246 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl" Apr 17 23:41:33.968543 kubelet[3208]: E0417 23:41:33.963264 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl" Apr 17 23:41:33.968543 kubelet[3208]: E0417 23:41:33.963307 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7ff76c96b9-qlvcl_calico-system(7681fd73-755e-41c9-ae03-ddbe78c2fcd5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7ff76c96b9-qlvcl_calico-system(7681fd73-755e-41c9-ae03-ddbe78c2fcd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl" podUID="7681fd73-755e-41c9-ae03-ddbe78c2fcd5" Apr 17 23:41:33.968543 kubelet[3208]: E0417 23:41:33.964073 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.968780 kubelet[3208]: E0417 23:41:33.964108 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-x8pnz" Apr 17 23:41:33.968780 kubelet[3208]: E0417 23:41:33.964126 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-x8pnz" Apr 17 23:41:33.968780 kubelet[3208]: E0417 23:41:33.964168 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-x8pnz_kube-system(e385655f-a6bc-4d82-b34f-39a693e4f020)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-x8pnz_kube-system(e385655f-a6bc-4d82-b34f-39a693e4f020)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-x8pnz" podUID="e385655f-a6bc-4d82-b34f-39a693e4f020" Apr 17 23:41:33.968975 kubelet[3208]: E0417 23:41:33.964225 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:33.968975 kubelet[3208]: E0417 23:41:33.964245 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg" Apr 17 23:41:33.968975 kubelet[3208]: E0417 23:41:33.964258 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg" Apr 17 23:41:33.969113 kubelet[3208]: E0417 23:41:33.964307 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845ff8dcf7-2f8gg_calico-system(c35def7d-4496-4fa2-b60e-d4ff32612dae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845ff8dcf7-2f8gg_calico-system(c35def7d-4496-4fa2-b60e-d4ff32612dae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg" podUID="c35def7d-4496-4fa2-b60e-d4ff32612dae" Apr 17 23:41:34.064855 containerd[1996]: time="2026-04-17T23:41:34.064785599Z" level=error msg="Failed to destroy network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:34.067556 containerd[1996]: time="2026-04-17T23:41:34.066787963Z" level=error msg="encountered an error cleaning up failed sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:34.067556 
containerd[1996]: time="2026-04-17T23:41:34.066877629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfd6r,Uid:b8e90ce8-ae92-43fe-bead-1cd18e86c253,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:34.069071 kubelet[3208]: E0417 23:41:34.068731 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:41:34.069071 kubelet[3208]: E0417 23:41:34.068827 3208 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfd6r" Apr 17 23:41:34.069071 kubelet[3208]: E0417 23:41:34.068871 3208 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfd6r" Apr 17 23:41:34.069391 kubelet[3208]: E0417 
23:41:34.068964 3208 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfd6r_calico-system(b8e90ce8-ae92-43fe-bead-1cd18e86c253)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfd6r_calico-system(b8e90ce8-ae92-43fe-bead-1cd18e86c253)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfd6r" podUID="b8e90ce8-ae92-43fe-bead-1cd18e86c253" Apr 17 23:41:34.144705 kubelet[3208]: I0417 23:41:34.143475 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-bd5g9" podStartSLOduration=2.423281716 podStartE2EDuration="23.143454959s" podCreationTimestamp="2026-04-17 23:41:11 +0000 UTC" firstStartedPulling="2026-04-17 23:41:12.144063879 +0000 UTC m=+22.718318308" lastFinishedPulling="2026-04-17 23:41:32.864237109 +0000 UTC m=+43.438491551" observedRunningTime="2026-04-17 23:41:34.034652381 +0000 UTC m=+44.608906834" watchObservedRunningTime="2026-04-17 23:41:34.143454959 +0000 UTC m=+44.717709412" Apr 17 23:41:34.477127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6-shm.mount: Deactivated successfully. Apr 17 23:41:34.477283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b-shm.mount: Deactivated successfully. Apr 17 23:41:34.477369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753-shm.mount: Deactivated successfully. 
Apr 17 23:41:34.477470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90-shm.mount: Deactivated successfully. Apr 17 23:41:34.477560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b-shm.mount: Deactivated successfully. Apr 17 23:41:34.983495 kubelet[3208]: I0417 23:41:34.983457 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:34.987457 kubelet[3208]: I0417 23:41:34.986996 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:34.987949 containerd[1996]: time="2026-04-17T23:41:34.987913103Z" level=info msg="StopPodSandbox for \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\"" Apr 17 23:41:34.990912 containerd[1996]: time="2026-04-17T23:41:34.989101058Z" level=info msg="StopPodSandbox for \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\"" Apr 17 23:41:34.993664 containerd[1996]: time="2026-04-17T23:41:34.993572985Z" level=info msg="Ensure that sandbox 0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b in task-service has been cleanup successfully" Apr 17 23:41:34.995164 containerd[1996]: time="2026-04-17T23:41:34.993572761Z" level=info msg="Ensure that sandbox bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6 in task-service has been cleanup successfully" Apr 17 23:41:34.998706 kubelet[3208]: I0417 23:41:34.998115 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:34.999023 containerd[1996]: time="2026-04-17T23:41:34.998993546Z" level=info msg="StopPodSandbox for 
\"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\"" Apr 17 23:41:35.002168 containerd[1996]: time="2026-04-17T23:41:35.002135566Z" level=info msg="Ensure that sandbox 452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18 in task-service has been cleanup successfully" Apr 17 23:41:35.007083 kubelet[3208]: I0417 23:41:35.004813 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:35.007216 containerd[1996]: time="2026-04-17T23:41:35.006331728Z" level=info msg="StopPodSandbox for \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\"" Apr 17 23:41:35.007216 containerd[1996]: time="2026-04-17T23:41:35.006533902Z" level=info msg="Ensure that sandbox eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b in task-service has been cleanup successfully" Apr 17 23:41:35.009554 kubelet[3208]: I0417 23:41:35.009531 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:35.011060 containerd[1996]: time="2026-04-17T23:41:35.011018965Z" level=info msg="StopPodSandbox for \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\"" Apr 17 23:41:35.012241 containerd[1996]: time="2026-04-17T23:41:35.012201670Z" level=info msg="Ensure that sandbox 3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90 in task-service has been cleanup successfully" Apr 17 23:41:35.019002 kubelet[3208]: I0417 23:41:35.018768 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:35.023795 containerd[1996]: time="2026-04-17T23:41:35.022494739Z" level=info msg="StopPodSandbox for \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\"" Apr 17 23:41:35.023795 containerd[1996]: 
time="2026-04-17T23:41:35.022835974Z" level=info msg="Ensure that sandbox b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753 in task-service has been cleanup successfully" Apr 17 23:41:35.030445 kubelet[3208]: I0417 23:41:35.030396 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:35.031445 containerd[1996]: time="2026-04-17T23:41:35.031221972Z" level=info msg="StopPodSandbox for \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\"" Apr 17 23:41:35.036131 containerd[1996]: time="2026-04-17T23:41:35.035430973Z" level=info msg="Ensure that sandbox c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e in task-service has been cleanup successfully" Apr 17 23:41:35.044618 kubelet[3208]: I0417 23:41:35.043764 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:35.045679 containerd[1996]: time="2026-04-17T23:41:35.045629202Z" level=info msg="StopPodSandbox for \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\"" Apr 17 23:41:35.050083 containerd[1996]: time="2026-04-17T23:41:35.050040580Z" level=info msg="Ensure that sandbox 22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab in task-service has been cleanup successfully" Apr 17 23:41:35.222436 systemd[1]: run-containerd-runc-k8s.io-57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a-runc.whp81z.mount: Deactivated successfully. Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.440 [INFO][4658] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.443 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" iface="eth0" netns="/var/run/netns/cni-d029af1d-a254-6e61-7eb1-a173386265bf" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.444 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" iface="eth0" netns="/var/run/netns/cni-d029af1d-a254-6e61-7eb1-a173386265bf" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.444 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" iface="eth0" netns="/var/run/netns/cni-d029af1d-a254-6e61-7eb1-a173386265bf" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.444 [INFO][4658] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.444 [INFO][4658] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.784 [INFO][4786] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.790 [INFO][4786] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.811 [INFO][4786] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.823 [WARNING][4786] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.823 [INFO][4786] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.825 [INFO][4786] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:35.834457 containerd[1996]: 2026-04-17 23:41:35.830 [INFO][4658] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:35.836890 containerd[1996]: time="2026-04-17T23:41:35.836693672Z" level=info msg="TearDown network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" successfully" Apr 17 23:41:35.836890 containerd[1996]: time="2026-04-17T23:41:35.836732952Z" level=info msg="StopPodSandbox for \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" returns successfully" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.282 [INFO][4647] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.282 [INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" iface="eth0" netns="/var/run/netns/cni-f4430d26-04a2-83d1-1571-dc0c00bfb218" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.282 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" iface="eth0" netns="/var/run/netns/cni-f4430d26-04a2-83d1-1571-dc0c00bfb218" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.284 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" iface="eth0" netns="/var/run/netns/cni-f4430d26-04a2-83d1-1571-dc0c00bfb218" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.284 [INFO][4647] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.284 [INFO][4647] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.757 [INFO][4750] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.761 [INFO][4750] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.762 [INFO][4750] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.802 [WARNING][4750] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.802 [INFO][4750] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.811 [INFO][4750] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:35.841986 containerd[1996]: 2026-04-17 23:41:35.829 [INFO][4647] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:35.845191 systemd[1]: run-netns-cni\x2dd029af1d\x2da254\x2d6e61\x2d7eb1\x2da173386265bf.mount: Deactivated successfully. 
Apr 17 23:41:35.854702 containerd[1996]: time="2026-04-17T23:41:35.845464012Z" level=info msg="TearDown network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" successfully" Apr 17 23:41:35.854702 containerd[1996]: time="2026-04-17T23:41:35.845499879Z" level=info msg="StopPodSandbox for \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" returns successfully" Apr 17 23:41:35.854702 containerd[1996]: time="2026-04-17T23:41:35.852488809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfd6r,Uid:b8e90ce8-ae92-43fe-bead-1cd18e86c253,Namespace:calico-system,Attempt:1,}" Apr 17 23:41:35.856852 containerd[1996]: time="2026-04-17T23:41:35.856811690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff76c96b9-qlvcl,Uid:7681fd73-755e-41c9-ae03-ddbe78c2fcd5,Namespace:calico-system,Attempt:1,}" Apr 17 23:41:35.857802 systemd[1]: run-netns-cni\x2df4430d26\x2d04a2\x2d83d1\x2d1571\x2ddc0c00bfb218.mount: Deactivated successfully. Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.386 [INFO][4693] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.387 [INFO][4693] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" iface="eth0" netns="/var/run/netns/cni-279268f6-06d9-f15f-2ff3-bce5c53b6fe1" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.387 [INFO][4693] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" iface="eth0" netns="/var/run/netns/cni-279268f6-06d9-f15f-2ff3-bce5c53b6fe1" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.388 [INFO][4693] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" iface="eth0" netns="/var/run/netns/cni-279268f6-06d9-f15f-2ff3-bce5c53b6fe1" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.388 [INFO][4693] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.388 [INFO][4693] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.787 [INFO][4772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.797 [INFO][4772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.826 [INFO][4772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.867 [WARNING][4772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.867 [INFO][4772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.873 [INFO][4772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:35.892123 containerd[1996]: 2026-04-17 23:41:35.882 [INFO][4693] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:35.895244 containerd[1996]: time="2026-04-17T23:41:35.892352448Z" level=info msg="TearDown network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" successfully" Apr 17 23:41:35.895244 containerd[1996]: time="2026-04-17T23:41:35.892385220Z" level=info msg="StopPodSandbox for \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" returns successfully" Apr 17 23:41:35.901420 systemd[1]: run-netns-cni\x2d279268f6\x2d06d9\x2df15f\x2d2ff3\x2dbce5c53b6fe1.mount: Deactivated successfully. 
Apr 17 23:41:35.904803 containerd[1996]: time="2026-04-17T23:41:35.902846405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-qfvhl,Uid:c163f60a-efa5-4d8b-876e-506e8f185561,Namespace:calico-system,Attempt:1,}" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.483 [INFO][4716] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.483 [INFO][4716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" iface="eth0" netns="/var/run/netns/cni-558ab6a3-4fd8-db81-2487-ff95216879ca" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.484 [INFO][4716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" iface="eth0" netns="/var/run/netns/cni-558ab6a3-4fd8-db81-2487-ff95216879ca" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.490 [INFO][4716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" iface="eth0" netns="/var/run/netns/cni-558ab6a3-4fd8-db81-2487-ff95216879ca" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.490 [INFO][4716] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.490 [INFO][4716] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.795 [INFO][4809] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.800 [INFO][4809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.873 [INFO][4809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.889 [WARNING][4809] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.889 [INFO][4809] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.897 [INFO][4809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:35.939012 containerd[1996]: 2026-04-17 23:41:35.919 [INFO][4716] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:35.943680 containerd[1996]: time="2026-04-17T23:41:35.939490477Z" level=info msg="TearDown network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" successfully" Apr 17 23:41:35.943680 containerd[1996]: time="2026-04-17T23:41:35.939530930Z" level=info msg="StopPodSandbox for \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" returns successfully" Apr 17 23:41:35.952138 containerd[1996]: time="2026-04-17T23:41:35.952082695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tkjr8,Uid:0352e4c9-6574-48ee-8802-b8c6cbb3a9cc,Namespace:kube-system,Attempt:1,}" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.333 [INFO][4671] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.334 [INFO][4671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" iface="eth0" netns="/var/run/netns/cni-20c26292-dc33-7b45-9730-1bb11ab0b1d3" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.335 [INFO][4671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" iface="eth0" netns="/var/run/netns/cni-20c26292-dc33-7b45-9730-1bb11ab0b1d3" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.336 [INFO][4671] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" iface="eth0" netns="/var/run/netns/cni-20c26292-dc33-7b45-9730-1bb11ab0b1d3" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.336 [INFO][4671] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.336 [INFO][4671] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.802 [INFO][4764] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.803 [INFO][4764] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.907 [INFO][4764] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.948 [WARNING][4764] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.950 [INFO][4764] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.962 [INFO][4764] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:35.973346 containerd[1996]: 2026-04-17 23:41:35.967 [INFO][4671] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:35.973346 containerd[1996]: time="2026-04-17T23:41:35.973274676Z" level=info msg="TearDown network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" successfully" Apr 17 23:41:35.973346 containerd[1996]: time="2026-04-17T23:41:35.973308165Z" level=info msg="StopPodSandbox for \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" returns successfully" Apr 17 23:41:35.980296 containerd[1996]: time="2026-04-17T23:41:35.980253388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rfp8t,Uid:d03f1c1f-c3ff-4e65-bb8e-b95383713725,Namespace:calico-system,Attempt:1,}" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.394 [INFO][4688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.394 [INFO][4688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" iface="eth0" netns="/var/run/netns/cni-700f6a38-a4ee-6e27-d593-efca1678df1f" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.395 [INFO][4688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" iface="eth0" netns="/var/run/netns/cni-700f6a38-a4ee-6e27-d593-efca1678df1f" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.396 [INFO][4688] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" iface="eth0" netns="/var/run/netns/cni-700f6a38-a4ee-6e27-d593-efca1678df1f" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.396 [INFO][4688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.396 [INFO][4688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.803 [INFO][4774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.804 [INFO][4774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.965 [INFO][4774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.994 [WARNING][4774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:35.994 [INFO][4774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:36.005 [INFO][4774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:36.031736 containerd[1996]: 2026-04-17 23:41:36.021 [INFO][4688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:36.033617 containerd[1996]: time="2026-04-17T23:41:36.032944299Z" level=info msg="TearDown network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" successfully" Apr 17 23:41:36.034441 containerd[1996]: time="2026-04-17T23:41:36.033201927Z" level=info msg="StopPodSandbox for \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" returns successfully" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.427 [INFO][4670] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.432 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" iface="eth0" netns="/var/run/netns/cni-65fdd014-e362-4790-0cb5-5ec6892466e9" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.432 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" iface="eth0" netns="/var/run/netns/cni-65fdd014-e362-4790-0cb5-5ec6892466e9" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.438 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" iface="eth0" netns="/var/run/netns/cni-65fdd014-e362-4790-0cb5-5ec6892466e9" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.438 [INFO][4670] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.438 [INFO][4670] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.807 [INFO][4782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:35.807 [INFO][4782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:36.007 [INFO][4782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:36.033 [WARNING][4782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:36.033 [INFO][4782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:36.038 [INFO][4782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:36.121509 containerd[1996]: 2026-04-17 23:41:36.064 [INFO][4670] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:36.124611 containerd[1996]: time="2026-04-17T23:41:36.124274924Z" level=info msg="TearDown network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" successfully" Apr 17 23:41:36.124611 containerd[1996]: time="2026-04-17T23:41:36.124318267Z" level=info msg="StopPodSandbox for \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" returns successfully" Apr 17 23:41:36.134652 containerd[1996]: time="2026-04-17T23:41:36.133958200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-2f8gg,Uid:c35def7d-4496-4fa2-b60e-d4ff32612dae,Namespace:calico-system,Attempt:1,}" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.438 [INFO][4721] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.441 [INFO][4721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" iface="eth0" netns="/var/run/netns/cni-1c7336d7-10af-18a8-54b9-ccc7f10bc2eb" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.443 [INFO][4721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" iface="eth0" netns="/var/run/netns/cni-1c7336d7-10af-18a8-54b9-ccc7f10bc2eb" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.447 [INFO][4721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" iface="eth0" netns="/var/run/netns/cni-1c7336d7-10af-18a8-54b9-ccc7f10bc2eb" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.447 [INFO][4721] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.447 [INFO][4721] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.808 [INFO][4785] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:35.808 [INFO][4785] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:36.045 [INFO][4785] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:36.141 [WARNING][4785] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:36.141 [INFO][4785] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:36.149 [INFO][4785] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:36.198379 containerd[1996]: 2026-04-17 23:41:36.160 [INFO][4721] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:36.200405 containerd[1996]: time="2026-04-17T23:41:36.199838599Z" level=info msg="TearDown network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" successfully" Apr 17 23:41:36.200405 containerd[1996]: time="2026-04-17T23:41:36.199923585Z" level=info msg="StopPodSandbox for \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" returns successfully" Apr 17 23:41:36.206959 kubelet[3208]: I0417 23:41:36.206093 3208 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-whisker-ca-bundle\") pod \"1550559b-67fa-454a-904e-63a2b926f540\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " Apr 17 23:41:36.206959 kubelet[3208]: I0417 23:41:36.206204 3208 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/1550559b-67fa-454a-904e-63a2b926f540-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1550559b-67fa-454a-904e-63a2b926f540-whisker-backend-key-pair\") pod \"1550559b-67fa-454a-904e-63a2b926f540\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " Apr 17 23:41:36.206959 kubelet[3208]: I0417 23:41:36.206256 3208 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-nginx-config\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-nginx-config\") pod \"1550559b-67fa-454a-904e-63a2b926f540\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " Apr 17 23:41:36.206959 kubelet[3208]: I0417 23:41:36.206290 3208 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1550559b-67fa-454a-904e-63a2b926f540-kube-api-access-j9hft\" (UniqueName: \"kubernetes.io/projected/1550559b-67fa-454a-904e-63a2b926f540-kube-api-access-j9hft\") pod \"1550559b-67fa-454a-904e-63a2b926f540\" (UID: \"1550559b-67fa-454a-904e-63a2b926f540\") " Apr 17 23:41:36.376498 kubelet[3208]: I0417 23:41:36.372638 3208 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-whisker-ca-bundle" pod "1550559b-67fa-454a-904e-63a2b926f540" (UID: "1550559b-67fa-454a-904e-63a2b926f540"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:41:36.376498 kubelet[3208]: I0417 23:41:36.372663 3208 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-nginx-config" pod "1550559b-67fa-454a-904e-63a2b926f540" (UID: "1550559b-67fa-454a-904e-63a2b926f540"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:41:36.391608 containerd[1996]: time="2026-04-17T23:41:36.391530074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-x8pnz,Uid:e385655f-a6bc-4d82-b34f-39a693e4f020,Namespace:kube-system,Attempt:1,}"
Apr 17 23:41:36.393742 kubelet[3208]: I0417 23:41:36.393665 3208 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1550559b-67fa-454a-904e-63a2b926f540-whisker-backend-key-pair" pod "1550559b-67fa-454a-904e-63a2b926f540" (UID: "1550559b-67fa-454a-904e-63a2b926f540"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:41:36.410387 kubelet[3208]: I0417 23:41:36.410341 3208 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1550559b-67fa-454a-904e-63a2b926f540-kube-api-access-j9hft" pod "1550559b-67fa-454a-904e-63a2b926f540" (UID: "1550559b-67fa-454a-904e-63a2b926f540"). InnerVolumeSpecName "kube-api-access-j9hft". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:41:36.473056 kubelet[3208]: I0417 23:41:36.472958 3208 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-nginx-config\") on node \"ip-172-31-24-87\" DevicePath \"\""
Apr 17 23:41:36.473056 kubelet[3208]: I0417 23:41:36.472997 3208 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j9hft\" (UniqueName: \"kubernetes.io/projected/1550559b-67fa-454a-904e-63a2b926f540-kube-api-access-j9hft\") on node \"ip-172-31-24-87\" DevicePath \"\""
Apr 17 23:41:36.473056 kubelet[3208]: I0417 23:41:36.473015 3208 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1550559b-67fa-454a-904e-63a2b926f540-whisker-ca-bundle\") on node \"ip-172-31-24-87\" DevicePath \"\""
Apr 17 23:41:36.473056 kubelet[3208]: I0417 23:41:36.473028 3208 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1550559b-67fa-454a-904e-63a2b926f540-whisker-backend-key-pair\") on node \"ip-172-31-24-87\" DevicePath \"\""
Apr 17 23:41:36.871564 systemd[1]: run-netns-cni\x2d65fdd014\x2de362\x2d4790\x2d0cb5\x2d5ec6892466e9.mount: Deactivated successfully.
Apr 17 23:41:36.871724 systemd[1]: run-netns-cni\x2d20c26292\x2ddc33\x2d7b45\x2d9730\x2d1bb11ab0b1d3.mount: Deactivated successfully.
Apr 17 23:41:36.871806 systemd[1]: run-netns-cni\x2d558ab6a3\x2d4fd8\x2ddb81\x2d2487\x2dff95216879ca.mount: Deactivated successfully.
Apr 17 23:41:36.871884 systemd[1]: run-netns-cni\x2d700f6a38\x2da4ee\x2d6e27\x2dd593\x2defca1678df1f.mount: Deactivated successfully.
Apr 17 23:41:36.871955 systemd[1]: run-netns-cni\x2d1c7336d7\x2d10af\x2d18a8\x2d54b9\x2dccc7f10bc2eb.mount: Deactivated successfully.
Apr 17 23:41:36.872032 systemd[1]: var-lib-kubelet-pods-1550559b\x2d67fa\x2d454a\x2d904e\x2d63a2b926f540-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9hft.mount: Deactivated successfully.
Apr 17 23:41:36.872116 systemd[1]: var-lib-kubelet-pods-1550559b\x2d67fa\x2d454a\x2d904e\x2d63a2b926f540-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 17 23:41:37.046758 systemd-networkd[1885]: cali75a74ef518c: Link UP
Apr 17 23:41:37.047083 systemd-networkd[1885]: cali75a74ef518c: Gained carrier
Apr 17 23:41:37.053161 (udev-worker)[5050]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:41:37.202036 systemd[1]: Removed slice kubepods-besteffort-pod1550559b_67fa_454a_904e_63a2b926f540.slice - libcontainer container kubepods-besteffort-pod1550559b_67fa_454a_904e_63a2b926f540.slice.
Apr 17 23:41:37.222212 systemd-networkd[1885]: cali879e9180a24: Link UP
Apr 17 23:41:37.222504 systemd-networkd[1885]: cali879e9180a24: Gained carrier
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.015 [ERROR][4915] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.080 [INFO][4915] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0 csi-node-driver- calico-system b8e90ce8-ae92-43fe-bead-1cd18e86c253 941 0 2026-04-17 23:41:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-87 csi-node-driver-jfd6r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali75a74ef518c [] [] }} ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.080 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.600 [INFO][4958] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" HandleID="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.623 [INFO][4958] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" HandleID="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b6720), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"csi-node-driver-jfd6r", "timestamp":"2026-04-17 23:41:36.60069552 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)}
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.623 [INFO][4958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.623 [INFO][4958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.623 [INFO][4958] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87'
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.671 [INFO][4958] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.692 [INFO][4958] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.746 [INFO][4958] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.766 [INFO][4958] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.789 [INFO][4958] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.789 [INFO][4958] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.802 [INFO][4958] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.837 [INFO][4958] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.897 [INFO][4958] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.129/26] block=192.168.66.128/26 handle="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.901 [INFO][4958] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.129/26] handle="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" host="ip-172-31-24-87"
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.908 [INFO][4958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:41:37.233972 containerd[1996]: 2026-04-17 23:41:36.909 [INFO][4958] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.129/26] IPv6=[] ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" HandleID="k8s-pod-network.9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:36.964 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e90ce8-ae92-43fe-bead-1cd18e86c253", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"csi-node-driver-jfd6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75a74ef518c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:36.969 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.129/32] ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:36.971 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75a74ef518c ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:37.048 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:37.050 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e90ce8-ae92-43fe-bead-1cd18e86c253", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d", Pod:"csi-node-driver-jfd6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75a74ef518c", MAC:"12:0b:f1:b4:55:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:37.241249 containerd[1996]: 2026-04-17 23:41:37.187 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d" Namespace="calico-system" Pod="csi-node-driver-jfd6r" WorkloadEndpoint="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.107 [ERROR][4927] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.148 [INFO][4927] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0 calico-apiserver-845ff8dcf7- calico-system c163f60a-efa5-4d8b-876e-506e8f185561 937 0 2026-04-17 23:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845ff8dcf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-87 calico-apiserver-845ff8dcf7-qfvhl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali879e9180a24 [] [] }} ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.148 [INFO][4927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.712 [INFO][4968] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" HandleID="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.797 [INFO][4968] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" HandleID="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f9150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"calico-apiserver-845ff8dcf7-qfvhl", "timestamp":"2026-04-17 23:41:36.712250069 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d4840)}
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.797 [INFO][4968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.911 [INFO][4968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.911 [INFO][4968] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87'
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:36.963 [INFO][4968] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.032 [INFO][4968] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.072 [INFO][4968] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.076 [INFO][4968] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.081 [INFO][4968] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.081 [INFO][4968] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.084 [INFO][4968] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.094 [INFO][4968] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.158 [INFO][4968] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.130/26] block=192.168.66.128/26 handle="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.159 [INFO][4968] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.130/26] handle="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" host="ip-172-31-24-87"
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.159 [INFO][4968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:41:37.438615 containerd[1996]: 2026-04-17 23:41:37.159 [INFO][4968] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.130/26] IPv6=[] ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" HandleID="k8s-pod-network.16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.213 [INFO][4927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c163f60a-efa5-4d8b-876e-506e8f185561", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"calico-apiserver-845ff8dcf7-qfvhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali879e9180a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.213 [INFO][4927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.130/32] ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.213 [INFO][4927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali879e9180a24 ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.220 [INFO][4927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.253 [INFO][4927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c163f60a-efa5-4d8b-876e-506e8f185561", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289", Pod:"calico-apiserver-845ff8dcf7-qfvhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali879e9180a24", MAC:"92:a5:f3:e4:f3:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:37.442739 containerd[1996]: 2026-04-17 23:41:37.384 [INFO][4927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-qfvhl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0"
Apr 17 23:41:37.601721 kubelet[3208]: I0417 23:41:37.600628 3208 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1550559b-67fa-454a-904e-63a2b926f540" path="/var/lib/kubelet/pods/1550559b-67fa-454a-904e-63a2b926f540/volumes"
Apr 17 23:41:37.640128 containerd[1996]: time="2026-04-17T23:41:37.637936523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:41:37.640128 containerd[1996]: time="2026-04-17T23:41:37.638019541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:41:37.640128 containerd[1996]: time="2026-04-17T23:41:37.638044474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:37.640128 containerd[1996]: time="2026-04-17T23:41:37.638161540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:37.678430 systemd-networkd[1885]: cali3031c89f48d: Link UP
Apr 17 23:41:37.680505 systemd-networkd[1885]: cali3031c89f48d: Gained carrier
Apr 17 23:41:37.690255 containerd[1996]: time="2026-04-17T23:41:37.687993874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:41:37.690255 containerd[1996]: time="2026-04-17T23:41:37.688092353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:41:37.697613 containerd[1996]: time="2026-04-17T23:41:37.692161050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:37.697613 containerd[1996]: time="2026-04-17T23:41:37.692290082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:37.744024 systemd[1]: Started cri-containerd-16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289.scope - libcontainer container 16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289.
Apr 17 23:41:37.774219 systemd[1]: Started cri-containerd-9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d.scope - libcontainer container 9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d.
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:36.379 [ERROR][4906] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:36.450 [INFO][4906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0 calico-kube-controllers-7ff76c96b9- calico-system 7681fd73-755e-41c9-ae03-ddbe78c2fcd5 935 0 2026-04-17 23:41:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ff76c96b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-87 calico-kube-controllers-7ff76c96b9-qlvcl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3031c89f48d [] [] }} ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:36.450 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.115 [INFO][5004] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" HandleID="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.214 [INFO][5004] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" HandleID="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030e190), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"calico-kube-controllers-7ff76c96b9-qlvcl", "timestamp":"2026-04-17 23:41:37.115664391 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e2000)}
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.219 [INFO][5004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.219 [INFO][5004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.219 [INFO][5004] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87'
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.311 [INFO][5004] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.446 [INFO][5004] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.490 [INFO][5004] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.505 [INFO][5004] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.539 [INFO][5004] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.539 [INFO][5004] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.567 [INFO][5004] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.579 [INFO][5004] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.630 [INFO][5004] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.131/26] block=192.168.66.128/26 handle="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.630 [INFO][5004] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.131/26] handle="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" host="ip-172-31-24-87"
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.630 [INFO][5004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:41:37.788840 containerd[1996]: 2026-04-17 23:41:37.630 [INFO][5004] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.131/26] IPv6=[] ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" HandleID="k8s-pod-network.fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.657 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0", GenerateName:"calico-kube-controllers-7ff76c96b9-", Namespace:"calico-system", SelfLink:"", UID:"7681fd73-755e-41c9-ae03-ddbe78c2fcd5", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff76c96b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"calico-kube-controllers-7ff76c96b9-qlvcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3031c89f48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.670 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.131/32] ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.670 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3031c89f48d ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.673 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.677 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0", GenerateName:"calico-kube-controllers-7ff76c96b9-", Namespace:"calico-system", SelfLink:"", UID:"7681fd73-755e-41c9-ae03-ddbe78c2fcd5", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff76c96b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d", Pod:"calico-kube-controllers-7ff76c96b9-qlvcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers",
IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3031c89f48d", MAC:"82:29:4b:3d:dd:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:37.794158 containerd[1996]: 2026-04-17 23:41:37.781 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d" Namespace="calico-system" Pod="calico-kube-controllers-7ff76c96b9-qlvcl" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:37.906927 containerd[1996]: time="2026-04-17T23:41:37.904897241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:37.913620 containerd[1996]: time="2026-04-17T23:41:37.909822614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:37.913620 containerd[1996]: time="2026-04-17T23:41:37.909863120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:37.913620 containerd[1996]: time="2026-04-17T23:41:37.909989533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:37.925798 systemd[1]: Created slice kubepods-besteffort-podeb89fa77_3d82_4bca_b3a8_fca50d32a994.slice - libcontainer container kubepods-besteffort-podeb89fa77_3d82_4bca_b3a8_fca50d32a994.slice. 
Apr 17 23:41:37.958561 systemd-networkd[1885]: calie888b07c0c9: Link UP Apr 17 23:41:37.959982 systemd-networkd[1885]: calie888b07c0c9: Gained carrier Apr 17 23:41:38.004798 systemd[1]: Started cri-containerd-fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d.scope - libcontainer container fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d. Apr 17 23:41:38.010200 kubelet[3208]: I0417 23:41:38.010134 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb89fa77-3d82-4bca-b3a8-fca50d32a994-whisker-ca-bundle\") pod \"whisker-5669bb44bd-8wz55\" (UID: \"eb89fa77-3d82-4bca-b3a8-fca50d32a994\") " pod="calico-system/whisker-5669bb44bd-8wz55" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:36.528 [ERROR][4939] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:36.736 [INFO][4939] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0 coredns-7d764666f9- kube-system 0352e4c9-6574-48ee-8802-b8c6cbb3a9cc 943 0 2026-04-17 23:40:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-87 coredns-7d764666f9-tkjr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie888b07c0c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" 
WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:36.737 [INFO][4939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.193 [INFO][5036] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" HandleID="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.221 [INFO][5036] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" HandleID="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361b40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-87", "pod":"coredns-7d764666f9-tkjr8", "timestamp":"2026-04-17 23:41:37.193740178 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040cf20)} Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.222 [INFO][5036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.630 [INFO][5036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.631 [INFO][5036] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87' Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.645 [INFO][5036] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.736 [INFO][5036] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.807 [INFO][5036] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.824 [INFO][5036] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.849 [INFO][5036] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.853 [INFO][5036] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.867 [INFO][5036] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2 Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.896 [INFO][5036] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.938 [INFO][5036] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.132/26] block=192.168.66.128/26 
handle="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.940 [INFO][5036] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.132/26] handle="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" host="ip-172-31-24-87" Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.943 [INFO][5036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:38.011572 containerd[1996]: 2026-04-17 23:41:37.943 [INFO][5036] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.132/26] IPv6=[] ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" HandleID="k8s-pod-network.ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.953 [INFO][4939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"coredns-7d764666f9-tkjr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie888b07c0c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.953 [INFO][4939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.132/32] ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.953 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie888b07c0c9 ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" 
WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.959 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.961 [INFO][4939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2", Pod:"coredns-7d764666f9-tkjr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie888b07c0c9", MAC:"de:fe:6d:96:36:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.012518 containerd[1996]: 2026-04-17 23:41:37.981 [INFO][4939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2" Namespace="kube-system" Pod="coredns-7d764666f9-tkjr8" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:38.019246 kubelet[3208]: I0417 23:41:38.018799 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwqk\" (UniqueName: \"kubernetes.io/projected/eb89fa77-3d82-4bca-b3a8-fca50d32a994-kube-api-access-7lwqk\") pod \"whisker-5669bb44bd-8wz55\" (UID: \"eb89fa77-3d82-4bca-b3a8-fca50d32a994\") " pod="calico-system/whisker-5669bb44bd-8wz55" Apr 17 23:41:38.019750 kubelet[3208]: I0417 23:41:38.019717 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/eb89fa77-3d82-4bca-b3a8-fca50d32a994-nginx-config\") pod 
\"whisker-5669bb44bd-8wz55\" (UID: \"eb89fa77-3d82-4bca-b3a8-fca50d32a994\") " pod="calico-system/whisker-5669bb44bd-8wz55" Apr 17 23:41:38.020006 kubelet[3208]: I0417 23:41:38.019973 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb89fa77-3d82-4bca-b3a8-fca50d32a994-whisker-backend-key-pair\") pod \"whisker-5669bb44bd-8wz55\" (UID: \"eb89fa77-3d82-4bca-b3a8-fca50d32a994\") " pod="calico-system/whisker-5669bb44bd-8wz55" Apr 17 23:41:38.106773 systemd-networkd[1885]: cali36b41feee3f: Link UP Apr 17 23:41:38.109777 systemd-networkd[1885]: cali36b41feee3f: Gained carrier Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:36.303 [ERROR][4948] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:36.558 [INFO][4948] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0 goldmane-9f7667bb8- calico-system d03f1c1f-c3ff-4e65-bb8e-b95383713725 936 0 2026-04-17 23:41:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-87 goldmane-9f7667bb8-rfp8t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali36b41feee3f [] [] }} ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:36.558 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.206 [INFO][5018] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" HandleID="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.345 [INFO][5018] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" HandleID="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103500), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"goldmane-9f7667bb8-rfp8t", "timestamp":"2026-04-17 23:41:37.206929849 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000246420)} Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.345 [INFO][5018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.940 [INFO][5018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.940 [INFO][5018] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87' Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.966 [INFO][5018] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:37.998 [INFO][5018] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.020 [INFO][5018] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.024 [INFO][5018] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.031 [INFO][5018] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.031 [INFO][5018] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.033 [INFO][5018] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1 Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.045 [INFO][5018] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.063 [INFO][5018] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.133/26] block=192.168.66.128/26 
handle="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.064 [INFO][5018] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.133/26] handle="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" host="ip-172-31-24-87" Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.065 [INFO][5018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:38.184570 containerd[1996]: 2026-04-17 23:41:38.066 [INFO][5018] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.133/26] IPv6=[] ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" HandleID="k8s-pod-network.1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.089 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"d03f1c1f-c3ff-4e65-bb8e-b95383713725", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"goldmane-9f7667bb8-rfp8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36b41feee3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.089 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.133/32] ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.089 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36b41feee3f ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.110 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.112 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"d03f1c1f-c3ff-4e65-bb8e-b95383713725", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1", Pod:"goldmane-9f7667bb8-rfp8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36b41feee3f", MAC:"3e:8e:f1:c1:07:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.186765 containerd[1996]: 2026-04-17 23:41:38.161 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1" Namespace="calico-system" Pod="goldmane-9f7667bb8-rfp8t" WorkloadEndpoint="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:38.202815 
systemd-networkd[1885]: cali75a74ef518c: Gained IPv6LL Apr 17 23:41:38.206439 systemd-networkd[1885]: cali04379fab760: Link UP Apr 17 23:41:38.208136 systemd-networkd[1885]: cali04379fab760: Gained carrier Apr 17 23:41:38.220540 containerd[1996]: time="2026-04-17T23:41:38.219786900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:38.224191 containerd[1996]: time="2026-04-17T23:41:38.223946216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:38.224191 containerd[1996]: time="2026-04-17T23:41:38.223988513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.224191 containerd[1996]: time="2026-04-17T23:41:38.224121745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.250923 containerd[1996]: time="2026-04-17T23:41:38.249669008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfd6r,Uid:b8e90ce8-ae92-43fe-bead-1cd18e86c253,Namespace:calico-system,Attempt:1,} returns sandbox id \"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d\"" Apr 17 23:41:38.264077 containerd[1996]: time="2026-04-17T23:41:38.260310824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5669bb44bd-8wz55,Uid:eb89fa77-3d82-4bca-b3a8-fca50d32a994,Namespace:calico-system,Attempt:0,}" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:36.797 [ERROR][4976] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:36.931 [INFO][4976] cni-plugin/plugin.go 342: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0 calico-apiserver-845ff8dcf7- calico-system c35def7d-4496-4fa2-b60e-d4ff32612dae 939 0 2026-04-17 23:41:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845ff8dcf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-87 calico-apiserver-845ff8dcf7-2f8gg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali04379fab760 [] [] }} ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:36.932 [INFO][4976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:37.365 [INFO][5046] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" HandleID="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:37.436 [INFO][5046] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" HandleID="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" 
Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e1b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"calico-apiserver-845ff8dcf7-2f8gg", "timestamp":"2026-04-17 23:41:37.365077303 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000298580)} Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:37.446 [INFO][5046] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.068 [INFO][5046] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.068 [INFO][5046] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87' Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.073 [INFO][5046] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.104 [INFO][5046] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.117 [INFO][5046] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.128 [INFO][5046] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.157 [INFO][5046] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 
2026-04-17 23:41:38.157 [INFO][5046] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.161 [INFO][5046] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98 Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.178 [INFO][5046] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.190 [INFO][5046] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.134/26] block=192.168.66.128/26 handle="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.191 [INFO][5046] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.134/26] handle="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" host="ip-172-31-24-87" Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.191 [INFO][5046] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:41:38.287842 containerd[1996]: 2026-04-17 23:41:38.191 [INFO][5046] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.134/26] IPv6=[] ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" HandleID="k8s-pod-network.dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.195 [INFO][4976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c35def7d-4496-4fa2-b60e-d4ff32612dae", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"calico-apiserver-845ff8dcf7-2f8gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04379fab760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.196 [INFO][4976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.134/32] ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.196 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04379fab760 ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.204 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.206 [INFO][4976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c35def7d-4496-4fa2-b60e-d4ff32612dae", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98", Pod:"calico-apiserver-845ff8dcf7-2f8gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04379fab760", MAC:"42:d6:93:f9:6c:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.288930 containerd[1996]: 2026-04-17 23:41:38.253 [INFO][4976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98" Namespace="calico-system" Pod="calico-apiserver-845ff8dcf7-2f8gg" WorkloadEndpoint="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:38.311782 containerd[1996]: time="2026-04-17T23:41:38.310970105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:41:38.320869 
systemd[1]: Started cri-containerd-ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2.scope - libcontainer container ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2. Apr 17 23:41:38.398540 systemd-networkd[1885]: caliaa0efc8c7d8: Link UP Apr 17 23:41:38.399278 systemd-networkd[1885]: caliaa0efc8c7d8: Gained carrier Apr 17 23:41:38.409052 containerd[1996]: time="2026-04-17T23:41:38.375185556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:38.409052 containerd[1996]: time="2026-04-17T23:41:38.375255376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:38.409052 containerd[1996]: time="2026-04-17T23:41:38.375282614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.409052 containerd[1996]: time="2026-04-17T23:41:38.375395963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.431648 containerd[1996]: time="2026-04-17T23:41:38.381779383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:38.431648 containerd[1996]: time="2026-04-17T23:41:38.421862119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:38.431648 containerd[1996]: time="2026-04-17T23:41:38.423933117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.431648 containerd[1996]: time="2026-04-17T23:41:38.424088214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.176 [ERROR][5000] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.396 [INFO][5000] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0 coredns-7d764666f9- kube-system e385655f-a6bc-4d82-b34f-39a693e4f020 940 0 2026-04-17 23:40:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-87 coredns-7d764666f9-x8pnz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa0efc8c7d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.396 [INFO][5000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.663 [INFO][5087] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" HandleID="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" 
Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.793 [INFO][5087] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" HandleID="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000458920), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-87", "pod":"coredns-7d764666f9-x8pnz", "timestamp":"2026-04-17 23:41:37.663714239 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188580)} Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:37.793 [INFO][5087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.194 [INFO][5087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.194 [INFO][5087] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87' Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.203 [INFO][5087] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.227 [INFO][5087] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.283 [INFO][5087] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.301 [INFO][5087] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.314 [INFO][5087] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.314 [INFO][5087] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.319 [INFO][5087] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.331 [INFO][5087] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.363 [INFO][5087] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.135/26] block=192.168.66.128/26 
handle="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.363 [INFO][5087] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.135/26] handle="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" host="ip-172-31-24-87" Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.367 [INFO][5087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:38.490365 containerd[1996]: 2026-04-17 23:41:38.367 [INFO][5087] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.135/26] IPv6=[] ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" HandleID="k8s-pod-network.3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.489853 systemd[1]: Started cri-containerd-dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98.scope - libcontainer container dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98. 
Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.382 [INFO][5000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e385655f-a6bc-4d82-b34f-39a693e4f020", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"coredns-7d764666f9-x8pnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa0efc8c7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.382 [INFO][5000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.135/32] ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.382 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa0efc8c7d8 ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.400 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.401 [INFO][5000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e385655f-a6bc-4d82-b34f-39a693e4f020", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba", Pod:"coredns-7d764666f9-x8pnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa0efc8c7d8", MAC:"de:4c:e8:2d:97:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:38.495883 containerd[1996]: 2026-04-17 23:41:38.463 [INFO][5000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba" Namespace="kube-system" Pod="coredns-7d764666f9-x8pnz" WorkloadEndpoint="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:38.552256 containerd[1996]: time="2026-04-17T23:41:38.549759771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-qfvhl,Uid:c163f60a-efa5-4d8b-876e-506e8f185561,Namespace:calico-system,Attempt:1,} returns sandbox id \"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289\"" Apr 17 23:41:38.579042 systemd-networkd[1885]: cali879e9180a24: Gained IPv6LL Apr 17 23:41:38.587852 systemd[1]: Started cri-containerd-1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1.scope - libcontainer container 1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1. Apr 17 23:41:38.641216 containerd[1996]: time="2026-04-17T23:41:38.640766853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-tkjr8,Uid:0352e4c9-6574-48ee-8802-b8c6cbb3a9cc,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2\"" Apr 17 23:41:38.656981 containerd[1996]: time="2026-04-17T23:41:38.656358752Z" level=info msg="CreateContainer within sandbox \"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:41:38.754861 containerd[1996]: time="2026-04-17T23:41:38.724824788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:41:38.754861 containerd[1996]: time="2026-04-17T23:41:38.724900925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:41:38.754861 containerd[1996]: time="2026-04-17T23:41:38.724925099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.754861 containerd[1996]: time="2026-04-17T23:41:38.725267352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:41:38.757407 systemd[1]: Started cri-containerd-3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba.scope - libcontainer container 3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba. Apr 17 23:41:38.797560 containerd[1996]: time="2026-04-17T23:41:38.797514112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff76c96b9-qlvcl,Uid:7681fd73-755e-41c9-ae03-ddbe78c2fcd5,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d\"" Apr 17 23:41:38.834901 systemd-networkd[1885]: cali3031c89f48d: Gained IPv6LL Apr 17 23:41:38.852028 containerd[1996]: time="2026-04-17T23:41:38.851890820Z" level=info msg="CreateContainer within sandbox \"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b82efaf36721cfe594576bd43a0aa891836751f95e4e4a22a61229e29954f71\"" Apr 17 23:41:38.852573 containerd[1996]: time="2026-04-17T23:41:38.852536047Z" level=info msg="StartContainer for \"8b82efaf36721cfe594576bd43a0aa891836751f95e4e4a22a61229e29954f71\"" Apr 17 23:41:38.874711 containerd[1996]: time="2026-04-17T23:41:38.874665926Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-845ff8dcf7-2f8gg,Uid:c35def7d-4496-4fa2-b60e-d4ff32612dae,Namespace:calico-system,Attempt:1,} returns sandbox id \"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98\"" Apr 17 23:41:38.949465 kernel: calico-node[4892]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:41:38.984775 containerd[1996]: time="2026-04-17T23:41:38.984456370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-x8pnz,Uid:e385655f-a6bc-4d82-b34f-39a693e4f020,Namespace:kube-system,Attempt:1,} returns sandbox id \"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba\"" Apr 17 23:41:38.989748 systemd[1]: Started cri-containerd-8b82efaf36721cfe594576bd43a0aa891836751f95e4e4a22a61229e29954f71.scope - libcontainer container 8b82efaf36721cfe594576bd43a0aa891836751f95e4e4a22a61229e29954f71. Apr 17 23:41:39.000456 containerd[1996]: time="2026-04-17T23:41:39.000412649Z" level=info msg="CreateContainer within sandbox \"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:41:39.075986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217664857.mount: Deactivated successfully. 
Apr 17 23:41:39.087897 containerd[1996]: time="2026-04-17T23:41:39.078906736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rfp8t,Uid:d03f1c1f-c3ff-4e65-bb8e-b95383713725,Namespace:calico-system,Attempt:1,} returns sandbox id \"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1\"" Apr 17 23:41:39.104046 containerd[1996]: time="2026-04-17T23:41:39.103895044Z" level=info msg="CreateContainer within sandbox \"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1473e24b121aed26b8193b84446de3e039ca6187770b65663935dd7c223db3d\"" Apr 17 23:41:39.109881 containerd[1996]: time="2026-04-17T23:41:39.109755649Z" level=info msg="StartContainer for \"c1473e24b121aed26b8193b84446de3e039ca6187770b65663935dd7c223db3d\"" Apr 17 23:41:39.308652 containerd[1996]: time="2026-04-17T23:41:39.308293256Z" level=info msg="StartContainer for \"8b82efaf36721cfe594576bd43a0aa891836751f95e4e4a22a61229e29954f71\" returns successfully" Apr 17 23:41:39.402828 systemd[1]: Started cri-containerd-c1473e24b121aed26b8193b84446de3e039ca6187770b65663935dd7c223db3d.scope - libcontainer container c1473e24b121aed26b8193b84446de3e039ca6187770b65663935dd7c223db3d. 
Apr 17 23:41:39.472227 systemd-networkd[1885]: calid84fa5f7328: Link UP
Apr 17 23:41:39.479962 systemd-networkd[1885]: calid84fa5f7328: Gained carrier
Apr 17 23:41:39.541680 systemd-networkd[1885]: caliaa0efc8c7d8: Gained IPv6LL
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:38.818 [INFO][5359] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0 whisker-5669bb44bd- calico-system eb89fa77-3d82-4bca-b3a8-fca50d32a994 975 0 2026-04-17 23:41:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5669bb44bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-87 whisker-5669bb44bd-8wz55 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid84fa5f7328 [] [] }} ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:38.821 [INFO][5359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.207 [INFO][5448] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" HandleID="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Workload="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.240 [INFO][5448] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" HandleID="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Workload="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-87", "pod":"whisker-5669bb44bd-8wz55", "timestamp":"2026-04-17 23:41:39.207698917 +0000 UTC"}, Hostname:"ip-172-31-24-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00032e6e0)}
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.240 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.241 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.241 [INFO][5448] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-87'
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.247 [INFO][5448] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.260 [INFO][5448] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.300 [INFO][5448] ipam/ipam.go 526: Trying affinity for 192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.309 [INFO][5448] ipam/ipam.go 160: Attempting to load block cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.314 [INFO][5448] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.314 [INFO][5448] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.319 [INFO][5448] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.378 [INFO][5448] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.433 [INFO][5448] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.66.136/26] block=192.168.66.128/26 handle="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.433 [INFO][5448] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.66.136/26] handle="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" host="ip-172-31-24-87"
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.433 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:41:39.570333 containerd[1996]: 2026-04-17 23:41:39.433 [INFO][5448] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.66.136/26] IPv6=[] ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" HandleID="k8s-pod-network.7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Workload="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.453 [INFO][5359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0", GenerateName:"whisker-5669bb44bd-", Namespace:"calico-system", SelfLink:"", UID:"eb89fa77-3d82-4bca-b3a8-fca50d32a994", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5669bb44bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"", Pod:"whisker-5669bb44bd-8wz55", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid84fa5f7328", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.454 [INFO][5359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.136/32] ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.455 [INFO][5359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid84fa5f7328 ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.485 [INFO][5359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.492 [INFO][5359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0", GenerateName:"whisker-5669bb44bd-", Namespace:"calico-system", SelfLink:"", UID:"eb89fa77-3d82-4bca-b3a8-fca50d32a994", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5669bb44bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d", Pod:"whisker-5669bb44bd-8wz55", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid84fa5f7328", MAC:"ee:b4:ec:ce:24:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:39.576052 containerd[1996]: 2026-04-17 23:41:39.516 [INFO][5359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d" Namespace="calico-system" Pod="whisker-5669bb44bd-8wz55" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--5669bb44bd--8wz55-eth0"
Apr 17 23:41:39.748479 containerd[1996]: time="2026-04-17T23:41:39.748414708Z" level=info msg="StartContainer for \"c1473e24b121aed26b8193b84446de3e039ca6187770b65663935dd7c223db3d\" returns successfully"
Apr 17 23:41:39.859502 systemd-networkd[1885]: calie888b07c0c9: Gained IPv6LL
Apr 17 23:41:39.865340 systemd-networkd[1885]: cali04379fab760: Gained IPv6LL
Apr 17 23:41:39.990374 systemd-networkd[1885]: cali36b41feee3f: Gained IPv6LL
Apr 17 23:41:40.567825 systemd-networkd[1885]: calid84fa5f7328: Gained IPv6LL
Apr 17 23:41:40.824008 (udev-worker)[5049]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:41:40.834142 systemd-networkd[1885]: vxlan.calico: Link UP
Apr 17 23:41:40.834551 systemd-networkd[1885]: vxlan.calico: Gained carrier
Apr 17 23:41:40.881386 containerd[1996]: time="2026-04-17T23:41:40.878362269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:41:40.881386 containerd[1996]: time="2026-04-17T23:41:40.879950394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:41:40.881386 containerd[1996]: time="2026-04-17T23:41:40.880035502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:40.882783 containerd[1996]: time="2026-04-17T23:41:40.880619751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:41:40.966097 systemd[1]: run-containerd-runc-k8s.io-7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d-runc.cRGRxw.mount: Deactivated successfully.
Apr 17 23:41:40.993812 systemd[1]: Started cri-containerd-7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d.scope - libcontainer container 7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d.
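The IPAM trace above claims an affinity for the block 192.168.66.128/26 on host ip-172-31-24-87 and then assigns 192.168.66.136 out of it. The subnet arithmetic behind that assignment can be checked with Python's standard `ipaddress` module (a sketch for illustration only, not Calico code):

```python
import ipaddress

# The block Calico confirmed affinity for, per the ipam.go entries above.
block = ipaddress.ip_network("192.168.66.128/26")

# The address the IPAM plugin assigned to the whisker pod.
assigned = ipaddress.ip_address("192.168.66.136")

# Membership test: the assigned address must fall inside the affine block.
print(assigned in block)                     # True
# A /26 covers 64 addresses: .128 through .191 here.
print(block.num_addresses)                   # 64
print(block.network_address, block.broadcast_address)
```

This is why successive pods on the same node (the other cali* interfaces in this log) keep receiving addresses from the same 64-address range until the block is exhausted.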
Apr 17 23:41:41.078280 kubelet[3208]: I0417 23:41:41.055744 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-tkjr8" podStartSLOduration=46.037167632 podStartE2EDuration="46.037167632s" podCreationTimestamp="2026-04-17 23:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:41:41.019862849 +0000 UTC m=+51.594117303" watchObservedRunningTime="2026-04-17 23:41:41.037167632 +0000 UTC m=+51.611422086"
Apr 17 23:41:41.080640 kubelet[3208]: I0417 23:41:41.080554 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-x8pnz" podStartSLOduration=46.080536814 podStartE2EDuration="46.080536814s" podCreationTimestamp="2026-04-17 23:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:41:41.078643002 +0000 UTC m=+51.652897448" watchObservedRunningTime="2026-04-17 23:41:41.080536814 +0000 UTC m=+51.654791265"
Apr 17 23:41:41.223847 containerd[1996]: time="2026-04-17T23:41:41.223678193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5669bb44bd-8wz55,Uid:eb89fa77-3d82-4bca-b3a8-fca50d32a994,Namespace:calico-system,Attempt:0,} returns sandbox id \"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d\""
Apr 17 23:41:41.906739 systemd-networkd[1885]: vxlan.calico: Gained IPv6LL
Apr 17 23:41:41.991786 containerd[1996]: time="2026-04-17T23:41:41.991659532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 17 23:41:42.006815 containerd[1996]: time="2026-04-17T23:41:42.005640839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.688323991s"
Apr 17 23:41:42.006815 containerd[1996]: time="2026-04-17T23:41:42.005732956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 17 23:41:42.008321 containerd[1996]: time="2026-04-17T23:41:42.007508660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 17 23:41:42.028096 containerd[1996]: time="2026-04-17T23:41:42.028043221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:42.029499 containerd[1996]: time="2026-04-17T23:41:42.029394193Z" level=info msg="CreateContainer within sandbox \"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 17 23:41:42.032618 containerd[1996]: time="2026-04-17T23:41:42.032451041Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:42.033408 containerd[1996]: time="2026-04-17T23:41:42.033374357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:42.070253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343997193.mount: Deactivated successfully.
Apr 17 23:41:42.077548 containerd[1996]: time="2026-04-17T23:41:42.077503470Z" level=info msg="CreateContainer within sandbox \"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"07529d39aba4cf60a464e26bd0b5d9374d99faa284a305a7c762dd0582d0c25c\""
Apr 17 23:41:42.079894 containerd[1996]: time="2026-04-17T23:41:42.078763773Z" level=info msg="StartContainer for \"07529d39aba4cf60a464e26bd0b5d9374d99faa284a305a7c762dd0582d0c25c\""
Apr 17 23:41:42.150871 systemd[1]: Started cri-containerd-07529d39aba4cf60a464e26bd0b5d9374d99faa284a305a7c762dd0582d0c25c.scope - libcontainer container 07529d39aba4cf60a464e26bd0b5d9374d99faa284a305a7c762dd0582d0c25c.
Apr 17 23:41:42.192829 containerd[1996]: time="2026-04-17T23:41:42.192785620Z" level=info msg="StartContainer for \"07529d39aba4cf60a464e26bd0b5d9374d99faa284a305a7c762dd0582d0c25c\" returns successfully"
Apr 17 23:41:44.558810 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.66.128:123
Apr 17 23:41:44.558903 ntpd[1957]: Listen normally on 9 cali75a74ef518c [fe80::ecee:eeff:feee:eeee%4]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.66.128:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 9 cali75a74ef518c [fe80::ecee:eeff:feee:eeee%4]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 10 cali879e9180a24 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 11 cali3031c89f48d [fe80::ecee:eeff:feee:eeee%6]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 12 calie888b07c0c9 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 13 cali36b41feee3f [fe80::ecee:eeff:feee:eeee%8]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 14 cali04379fab760 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 15 caliaa0efc8c7d8 [fe80::ecee:eeff:feee:eeee%10]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 16 calid84fa5f7328 [fe80::ecee:eeff:feee:eeee%11]:123
Apr 17 23:41:44.563385 ntpd[1957]: 17 Apr 23:41:44 ntpd[1957]: Listen normally on 17 vxlan.calico [fe80::64ca:c7ff:fea5:fceb%12]:123
Apr 17 23:41:44.558960 ntpd[1957]: Listen normally on 10 cali879e9180a24 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 17 23:41:44.559001 ntpd[1957]: Listen normally on 11 cali3031c89f48d [fe80::ecee:eeff:feee:eeee%6]:123
Apr 17 23:41:44.559038 ntpd[1957]: Listen normally on 12 calie888b07c0c9 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 17 23:41:44.559075 ntpd[1957]: Listen normally on 13 cali36b41feee3f [fe80::ecee:eeff:feee:eeee%8]:123
Apr 17 23:41:44.559106 ntpd[1957]: Listen normally on 14 cali04379fab760 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 17 23:41:44.559143 ntpd[1957]: Listen normally on 15 caliaa0efc8c7d8 [fe80::ecee:eeff:feee:eeee%10]:123
Apr 17 23:41:44.559171 ntpd[1957]: Listen normally on 16 calid84fa5f7328 [fe80::ecee:eeff:feee:eeee%11]:123
Apr 17 23:41:44.559202 ntpd[1957]: Listen normally on 17 vxlan.calico [fe80::64ca:c7ff:fea5:fceb%12]:123
Apr 17 23:41:45.399610 containerd[1996]: time="2026-04-17T23:41:45.399558371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:45.401576 containerd[1996]: time="2026-04-17T23:41:45.401413066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 17 23:41:45.403754 containerd[1996]: time="2026-04-17T23:41:45.403694675Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:45.407493 containerd[1996]: time="2026-04-17T23:41:45.407457781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:45.408491 containerd[1996]: time="2026-04-17T23:41:45.408347789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.400790752s"
Apr 17 23:41:45.408491 containerd[1996]: time="2026-04-17T23:41:45.408387336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 17 23:41:45.410585 containerd[1996]: time="2026-04-17T23:41:45.410397144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 17 23:41:45.416724 containerd[1996]: time="2026-04-17T23:41:45.416690144Z" level=info msg="CreateContainer within sandbox \"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 23:41:45.467129 containerd[1996]: time="2026-04-17T23:41:45.467073669Z" level=info msg="CreateContainer within sandbox \"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29983ec5a6957e9fbfb086b399507794df396ca35a77a84f37ea74d62d9f4c4e\""
Apr 17 23:41:45.468013 containerd[1996]: time="2026-04-17T23:41:45.467973023Z" level=info msg="StartContainer for \"29983ec5a6957e9fbfb086b399507794df396ca35a77a84f37ea74d62d9f4c4e\""
Apr 17 23:41:45.552966 systemd[1]: Started cri-containerd-29983ec5a6957e9fbfb086b399507794df396ca35a77a84f37ea74d62d9f4c4e.scope - libcontainer container 29983ec5a6957e9fbfb086b399507794df396ca35a77a84f37ea74d62d9f4c4e.
Apr 17 23:41:45.625547 containerd[1996]: time="2026-04-17T23:41:45.625495480Z" level=info msg="StartContainer for \"29983ec5a6957e9fbfb086b399507794df396ca35a77a84f37ea74d62d9f4c4e\" returns successfully"
Apr 17 23:41:46.140010 kubelet[3208]: I0417 23:41:46.139932 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-845ff8dcf7-qfvhl" podStartSLOduration=29.357086816 podStartE2EDuration="36.139887733s" podCreationTimestamp="2026-04-17 23:41:10 +0000 UTC" firstStartedPulling="2026-04-17 23:41:38.626961407 +0000 UTC m=+49.201215840" lastFinishedPulling="2026-04-17 23:41:45.409762306 +0000 UTC m=+55.984016757" observedRunningTime="2026-04-17 23:41:46.136707628 +0000 UTC m=+56.710962082" watchObservedRunningTime="2026-04-17 23:41:46.139887733 +0000 UTC m=+56.714142187"
Apr 17 23:41:47.110702 kubelet[3208]: I0417 23:41:47.110658 3208 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:41:48.659716 systemd[1]: Started sshd@7-172.31.24.87:22-20.229.252.112:57218.service - OpenSSH per-connection server daemon (20.229.252.112:57218).
Apr 17 23:41:49.794562 sshd[5808]: Accepted publickey for core from 20.229.252.112 port 57218 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:41:49.802341 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:49.831370 systemd-logind[1962]: New session 8 of user core.
Apr 17 23:41:49.847664 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:41:50.128421 containerd[1996]: time="2026-04-17T23:41:50.128152076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:50.132005 containerd[1996]: time="2026-04-17T23:41:50.131935533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 17 23:41:50.135886 containerd[1996]: time="2026-04-17T23:41:50.135843425Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:50.141111 containerd[1996]: time="2026-04-17T23:41:50.141051175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:50.142675 containerd[1996]: time="2026-04-17T23:41:50.142523821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.732083895s"
Apr 17 23:41:50.142675 containerd[1996]: time="2026-04-17T23:41:50.142573313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 17 23:41:50.316457 containerd[1996]: time="2026-04-17T23:41:50.316413326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 17 23:41:50.493998 containerd[1996]: time="2026-04-17T23:41:50.492459235Z" level=info msg="StopPodSandbox for \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\""
Apr 17 23:41:50.589385 containerd[1996]: time="2026-04-17T23:41:50.589348449Z" level=info msg="CreateContainer within sandbox \"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 17 23:41:50.623927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485818294.mount: Deactivated successfully.
Apr 17 23:41:50.671488 containerd[1996]: time="2026-04-17T23:41:50.671437025Z" level=info msg="CreateContainer within sandbox \"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd\""
Apr 17 23:41:50.673869 containerd[1996]: time="2026-04-17T23:41:50.673354419Z" level=info msg="StartContainer for \"948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd\""
Apr 17 23:41:50.716175 containerd[1996]: time="2026-04-17T23:41:50.716134634Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:41:50.720188 containerd[1996]: time="2026-04-17T23:41:50.719731823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 17 23:41:50.726676 containerd[1996]: time="2026-04-17T23:41:50.725985974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 409.320553ms"
Apr 17 23:41:50.726876 containerd[1996]: time="2026-04-17T23:41:50.726853975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 17 23:41:50.731612 containerd[1996]: time="2026-04-17T23:41:50.731549771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 17 23:41:50.760012 containerd[1996]: time="2026-04-17T23:41:50.759900416Z" level=info msg="CreateContainer within sandbox \"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 17 23:41:50.821574 containerd[1996]: time="2026-04-17T23:41:50.821527062Z" level=info msg="CreateContainer within sandbox \"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"19de92a453259fd10a003ee9ab996d9441e9655e2f11d2349962e1b52c4ad3a4\""
Apr 17 23:41:50.824892 containerd[1996]: time="2026-04-17T23:41:50.824803418Z" level=info msg="StartContainer for \"19de92a453259fd10a003ee9ab996d9441e9655e2f11d2349962e1b52c4ad3a4\""
Apr 17 23:41:51.000586 systemd[1]: Started cri-containerd-948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd.scope - libcontainer container 948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd.
Apr 17 23:41:51.010888 systemd[1]: Started cri-containerd-19de92a453259fd10a003ee9ab996d9441e9655e2f11d2349962e1b52c4ad3a4.scope - libcontainer container 19de92a453259fd10a003ee9ab996d9441e9655e2f11d2349962e1b52c4ad3a4.
Apr 17 23:41:51.315468 containerd[1996]: time="2026-04-17T23:41:51.315144231Z" level=info msg="StartContainer for \"948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd\" returns successfully"
Apr 17 23:41:51.315468 containerd[1996]: time="2026-04-17T23:41:51.315245013Z" level=info msg="StartContainer for \"19de92a453259fd10a003ee9ab996d9441e9655e2f11d2349962e1b52c4ad3a4\" returns successfully"
Apr 17 23:41:52.127899 sshd[5808]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:52.154413 systemd[1]: sshd@7-172.31.24.87:22-20.229.252.112:57218.service: Deactivated successfully.
Apr 17 23:41:52.179149 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:41:52.187775 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:41:52.190818 systemd-logind[1962]: Removed session 8.
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:51.513 [WARNING][5831] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0", GenerateName:"calico-kube-controllers-7ff76c96b9-", Namespace:"calico-system", SelfLink:"", UID:"7681fd73-755e-41c9-ae03-ddbe78c2fcd5", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff76c96b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d", Pod:"calico-kube-controllers-7ff76c96b9-qlvcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3031c89f48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:51.520 [INFO][5831] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:51.521 [INFO][5831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" iface="eth0" netns=""
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:51.521 [INFO][5831] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:51.521 [INFO][5831] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.402 [INFO][5918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.409 [INFO][5918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.410 [INFO][5918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.440 [WARNING][5918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.440 [INFO][5918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0"
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.442 [INFO][5918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:41:52.506547 containerd[1996]: 2026-04-17 23:41:52.452 [INFO][5831] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b"
Apr 17 23:41:52.525560 containerd[1996]: time="2026-04-17T23:41:52.524624503Z" level=info msg="TearDown network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" successfully"
Apr 17 23:41:52.525560 containerd[1996]: time="2026-04-17T23:41:52.524706840Z" level=info msg="StopPodSandbox for \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" returns successfully"
Apr 17 23:41:52.798105 kubelet[3208]: I0417 23:41:52.766704 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ff76c96b9-qlvcl" podStartSLOduration=30.258369355 podStartE2EDuration="41.716081763s" podCreationTimestamp="2026-04-17 23:41:11 +0000 UTC" firstStartedPulling="2026-04-17 23:41:38.82903568 +0000 UTC m=+49.403290131" lastFinishedPulling="2026-04-17 23:41:50.286748086 +0000 UTC m=+60.861002539" observedRunningTime="2026-04-17 23:41:52.704549696 +0000 UTC m=+63.278804150" watchObservedRunningTime="2026-04-17
23:41:52.716081763 +0000 UTC m=+63.290336211" Apr 17 23:41:52.855954 kubelet[3208]: I0417 23:41:52.853783 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-845ff8dcf7-2f8gg" podStartSLOduration=31.003497135 podStartE2EDuration="42.853502405s" podCreationTimestamp="2026-04-17 23:41:10 +0000 UTC" firstStartedPulling="2026-04-17 23:41:38.879791496 +0000 UTC m=+49.454045940" lastFinishedPulling="2026-04-17 23:41:50.729796752 +0000 UTC m=+61.304051210" observedRunningTime="2026-04-17 23:41:52.852329132 +0000 UTC m=+63.426583584" watchObservedRunningTime="2026-04-17 23:41:52.853502405 +0000 UTC m=+63.427756856" Apr 17 23:41:53.150430 containerd[1996]: time="2026-04-17T23:41:53.150287162Z" level=info msg="RemovePodSandbox for \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\"" Apr 17 23:41:53.161623 containerd[1996]: time="2026-04-17T23:41:53.161544628Z" level=info msg="Forcibly stopping sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\"" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.634 [WARNING][5963] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0", GenerateName:"calico-kube-controllers-7ff76c96b9-", Namespace:"calico-system", SelfLink:"", UID:"7681fd73-755e-41c9-ae03-ddbe78c2fcd5", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff76c96b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"fb033e270b875cfc85cef2b1f39fa1b0aa80abf251902f8076bc723a4a458d0d", Pod:"calico-kube-controllers-7ff76c96b9-qlvcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3031c89f48d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.634 [INFO][5963] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.634 [INFO][5963] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" iface="eth0" netns="" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.634 [INFO][5963] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.634 [INFO][5963] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.673 [INFO][5970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.673 [INFO][5970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.674 [INFO][5970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.691 [WARNING][5970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.691 [INFO][5970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" HandleID="k8s-pod-network.0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Workload="ip--172--31--24--87-k8s-calico--kube--controllers--7ff76c96b9--qlvcl-eth0" Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.693 [INFO][5970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:53.703721 containerd[1996]: 2026-04-17 23:41:53.698 [INFO][5963] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b" Apr 17 23:41:53.707095 containerd[1996]: time="2026-04-17T23:41:53.703755692Z" level=info msg="TearDown network for sandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" successfully" Apr 17 23:41:53.775521 containerd[1996]: time="2026-04-17T23:41:53.775334824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:53.795338 containerd[1996]: time="2026-04-17T23:41:53.795215404Z" level=info msg="RemovePodSandbox \"0206c78d3c7fb4eff6e11f55fcca1dec2c9c6a349678901203fdbb71a690f59b\" returns successfully" Apr 17 23:41:53.842036 containerd[1996]: time="2026-04-17T23:41:53.841868358Z" level=info msg="StopPodSandbox for \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\"" Apr 17 23:41:54.083357 systemd[1]: run-containerd-runc-k8s.io-948172d0639b21f316e2e03aca9a8bdd948494bf151b40a8223f204a04dc0ebd-runc.C8wy9p.mount: Deactivated successfully. Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.000 [WARNING][5985] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.000 [INFO][5985] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.000 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" iface="eth0" netns="" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.000 [INFO][5985] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.000 [INFO][5985] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.103 [INFO][5996] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.103 [INFO][5996] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.103 [INFO][5996] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.118 [WARNING][5996] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.119 [INFO][5996] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.121 [INFO][5996] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:54.134285 containerd[1996]: 2026-04-17 23:41:54.126 [INFO][5985] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.135692 containerd[1996]: time="2026-04-17T23:41:54.134343284Z" level=info msg="TearDown network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" successfully" Apr 17 23:41:54.135692 containerd[1996]: time="2026-04-17T23:41:54.134374064Z" level=info msg="StopPodSandbox for \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" returns successfully" Apr 17 23:41:54.142360 containerd[1996]: time="2026-04-17T23:41:54.141267339Z" level=info msg="RemovePodSandbox for \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\"" Apr 17 23:41:54.142360 containerd[1996]: time="2026-04-17T23:41:54.141321348Z" level=info msg="Forcibly stopping sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\"" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.247 [WARNING][6022] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" WorkloadEndpoint="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.248 [INFO][6022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.248 [INFO][6022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" iface="eth0" netns="" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.248 [INFO][6022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.248 [INFO][6022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.330 [INFO][6032] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.330 [INFO][6032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.330 [INFO][6032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.341 [WARNING][6032] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.341 [INFO][6032] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" HandleID="k8s-pod-network.3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Workload="ip--172--31--24--87-k8s-whisker--759f44599--zm69g-eth0" Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.344 [INFO][6032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:54.351563 containerd[1996]: 2026-04-17 23:41:54.347 [INFO][6022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90" Apr 17 23:41:54.351563 containerd[1996]: time="2026-04-17T23:41:54.351156827Z" level=info msg="TearDown network for sandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" successfully" Apr 17 23:41:54.376341 containerd[1996]: time="2026-04-17T23:41:54.376158462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:54.376341 containerd[1996]: time="2026-04-17T23:41:54.376256725Z" level=info msg="RemovePodSandbox \"3e4a15c86e063dee36f77da701f33a7ee8fba3c3bd2f8be99ef89310811afd90\" returns successfully" Apr 17 23:41:54.377396 containerd[1996]: time="2026-04-17T23:41:54.376877917Z" level=info msg="StopPodSandbox for \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\"" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.475 [WARNING][6046] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e90ce8-ae92-43fe-bead-1cd18e86c253", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d", Pod:"csi-node-driver-jfd6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75a74ef518c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.476 [INFO][6046] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.476 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" iface="eth0" netns="" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.476 [INFO][6046] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.476 [INFO][6046] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.533 [INFO][6053] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.533 [INFO][6053] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.533 [INFO][6053] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.550 [WARNING][6053] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.550 [INFO][6053] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.554 [INFO][6053] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:54.561092 containerd[1996]: 2026-04-17 23:41:54.557 [INFO][6046] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.562196 containerd[1996]: time="2026-04-17T23:41:54.561850440Z" level=info msg="TearDown network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" successfully" Apr 17 23:41:54.562196 containerd[1996]: time="2026-04-17T23:41:54.561889851Z" level=info msg="StopPodSandbox for \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" returns successfully" Apr 17 23:41:54.564269 containerd[1996]: time="2026-04-17T23:41:54.563779886Z" level=info msg="RemovePodSandbox for \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\"" Apr 17 23:41:54.564269 containerd[1996]: time="2026-04-17T23:41:54.563822100Z" level=info msg="Forcibly stopping sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\"" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.642 [WARNING][6067] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e90ce8-ae92-43fe-bead-1cd18e86c253", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d", Pod:"csi-node-driver-jfd6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75a74ef518c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.643 [INFO][6067] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.643 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" iface="eth0" netns="" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.644 [INFO][6067] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.644 [INFO][6067] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.728 [INFO][6074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.729 [INFO][6074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.729 [INFO][6074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.751 [WARNING][6074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.751 [INFO][6074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" HandleID="k8s-pod-network.bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Workload="ip--172--31--24--87-k8s-csi--node--driver--jfd6r-eth0" Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.755 [INFO][6074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:54.772266 containerd[1996]: 2026-04-17 23:41:54.762 [INFO][6067] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6" Apr 17 23:41:54.775376 containerd[1996]: time="2026-04-17T23:41:54.772296245Z" level=info msg="TearDown network for sandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" successfully" Apr 17 23:41:54.930678 kubelet[3208]: I0417 23:41:54.916464 3208 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:41:55.058025 containerd[1996]: time="2026-04-17T23:41:55.056695433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:55.058025 containerd[1996]: time="2026-04-17T23:41:55.056787569Z" level=info msg="RemovePodSandbox \"bc0090ad8e517acb32407f6be62db5dbc63eca224e8032796381aaac6b4087a6\" returns successfully" Apr 17 23:41:55.078973 containerd[1996]: time="2026-04-17T23:41:55.078931095Z" level=info msg="StopPodSandbox for \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\"" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.224 [WARNING][6088] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"d03f1c1f-c3ff-4e65-bb8e-b95383713725", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1", Pod:"goldmane-9f7667bb8-rfp8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali36b41feee3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.225 [INFO][6088] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.225 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" iface="eth0" netns="" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.225 [INFO][6088] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.225 [INFO][6088] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.348 [INFO][6095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.348 [INFO][6095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.349 [INFO][6095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.363 [WARNING][6095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.364 [INFO][6095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.375 [INFO][6095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:55.390441 containerd[1996]: 2026-04-17 23:41:55.378 [INFO][6088] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.390441 containerd[1996]: time="2026-04-17T23:41:55.390368370Z" level=info msg="TearDown network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" successfully" Apr 17 23:41:55.390441 containerd[1996]: time="2026-04-17T23:41:55.390401552Z" level=info msg="StopPodSandbox for \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" returns successfully" Apr 17 23:41:55.425795 containerd[1996]: time="2026-04-17T23:41:55.425641095Z" level=info msg="RemovePodSandbox for \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\"" Apr 17 23:41:55.425795 containerd[1996]: time="2026-04-17T23:41:55.425742844Z" level=info msg="Forcibly stopping sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\"" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.598 [WARNING][6114] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"d03f1c1f-c3ff-4e65-bb8e-b95383713725", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1", Pod:"goldmane-9f7667bb8-rfp8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36b41feee3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.598 [INFO][6114] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.598 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" iface="eth0" netns="" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.598 [INFO][6114] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.598 [INFO][6114] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.684 [INFO][6122] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.685 [INFO][6122] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.685 [INFO][6122] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.700 [WARNING][6122] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.700 [INFO][6122] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" HandleID="k8s-pod-network.452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Workload="ip--172--31--24--87-k8s-goldmane--9f7667bb8--rfp8t-eth0" Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.702 [INFO][6122] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:55.716973 containerd[1996]: 2026-04-17 23:41:55.707 [INFO][6114] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18" Apr 17 23:41:55.721539 containerd[1996]: time="2026-04-17T23:41:55.719102542Z" level=info msg="TearDown network for sandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" successfully" Apr 17 23:41:55.734342 containerd[1996]: time="2026-04-17T23:41:55.734149863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:55.734342 containerd[1996]: time="2026-04-17T23:41:55.734223896Z" level=info msg="RemovePodSandbox \"452ee1138bf1952cd62bbd733f9486b11267244da79c48d85843d925e1a1df18\" returns successfully" Apr 17 23:41:55.736629 containerd[1996]: time="2026-04-17T23:41:55.736174045Z" level=info msg="StopPodSandbox for \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\"" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.837 [WARNING][6138] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c35def7d-4496-4fa2-b60e-d4ff32612dae", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98", Pod:"calico-apiserver-845ff8dcf7-2f8gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04379fab760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.839 [INFO][6138] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.839 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" iface="eth0" netns="" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.839 [INFO][6138] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.839 [INFO][6138] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.909 [INFO][6146] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.909 [INFO][6146] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.909 [INFO][6146] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.922 [WARNING][6146] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.922 [INFO][6146] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.927 [INFO][6146] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:55.947328 containerd[1996]: 2026-04-17 23:41:55.938 [INFO][6138] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:55.952421 containerd[1996]: time="2026-04-17T23:41:55.948310767Z" level=info msg="TearDown network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" successfully" Apr 17 23:41:55.952421 containerd[1996]: time="2026-04-17T23:41:55.948346277Z" level=info msg="StopPodSandbox for \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" returns successfully" Apr 17 23:41:55.952421 containerd[1996]: time="2026-04-17T23:41:55.950121667Z" level=info msg="RemovePodSandbox for \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\"" Apr 17 23:41:55.952421 containerd[1996]: time="2026-04-17T23:41:55.950160154Z" level=info msg="Forcibly stopping sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\"" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.087 [WARNING][6161] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c35def7d-4496-4fa2-b60e-d4ff32612dae", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"dca99e704cda8ce1f0a3808fb174b600408a7f77393911545ff9d659044e9a98", Pod:"calico-apiserver-845ff8dcf7-2f8gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04379fab760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.087 [INFO][6161] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.087 [INFO][6161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" iface="eth0" netns="" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.087 [INFO][6161] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.087 [INFO][6161] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.152 [INFO][6168] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.152 [INFO][6168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.152 [INFO][6168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.163 [WARNING][6168] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.163 [INFO][6168] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" HandleID="k8s-pod-network.eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--2f8gg-eth0" Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.173 [INFO][6168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:56.194349 containerd[1996]: 2026-04-17 23:41:56.187 [INFO][6161] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b" Apr 17 23:41:56.194349 containerd[1996]: time="2026-04-17T23:41:56.193469501Z" level=info msg="TearDown network for sandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" successfully" Apr 17 23:41:56.215750 containerd[1996]: time="2026-04-17T23:41:56.215701760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:56.215996 containerd[1996]: time="2026-04-17T23:41:56.215969862Z" level=info msg="RemovePodSandbox \"eb5eabf17b095dc491511e148d50a682e97f4349e390e777f400f242767f1e1b\" returns successfully" Apr 17 23:41:56.220736 containerd[1996]: time="2026-04-17T23:41:56.220698860Z" level=info msg="StopPodSandbox for \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\"" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.442 [WARNING][6183] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c163f60a-efa5-4d8b-876e-506e8f185561", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289", Pod:"calico-apiserver-845ff8dcf7-qfvhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali879e9180a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.442 [INFO][6183] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.442 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" iface="eth0" netns="" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.442 [INFO][6183] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.443 [INFO][6183] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.522 [INFO][6191] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.525 [INFO][6191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.525 [INFO][6191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.542 [WARNING][6191] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.543 [INFO][6191] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.545 [INFO][6191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:56.560196 containerd[1996]: 2026-04-17 23:41:56.553 [INFO][6183] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.560196 containerd[1996]: time="2026-04-17T23:41:56.559662000Z" level=info msg="TearDown network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" successfully" Apr 17 23:41:56.560196 containerd[1996]: time="2026-04-17T23:41:56.559698267Z" level=info msg="StopPodSandbox for \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" returns successfully" Apr 17 23:41:56.563014 containerd[1996]: time="2026-04-17T23:41:56.561710889Z" level=info msg="RemovePodSandbox for \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\"" Apr 17 23:41:56.563014 containerd[1996]: time="2026-04-17T23:41:56.561749234Z" level=info msg="Forcibly stopping sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\"" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.700 [WARNING][6205] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0", GenerateName:"calico-apiserver-845ff8dcf7-", Namespace:"calico-system", SelfLink:"", UID:"c163f60a-efa5-4d8b-876e-506e8f185561", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 41, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845ff8dcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"16e0fb85c5c9e50193e181a26213d93e42825101fa7532202f35fe0dbcdf2289", Pod:"calico-apiserver-845ff8dcf7-qfvhl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali879e9180a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.704 [INFO][6205] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.704 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" iface="eth0" netns="" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.705 [INFO][6205] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.708 [INFO][6205] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.806 [INFO][6212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.807 [INFO][6212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.807 [INFO][6212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.819 [WARNING][6212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.819 [INFO][6212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" HandleID="k8s-pod-network.b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Workload="ip--172--31--24--87-k8s-calico--apiserver--845ff8dcf7--qfvhl-eth0" Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.823 [INFO][6212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:56.851253 containerd[1996]: 2026-04-17 23:41:56.836 [INFO][6205] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753" Apr 17 23:41:56.851253 containerd[1996]: time="2026-04-17T23:41:56.850142593Z" level=info msg="TearDown network for sandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" successfully" Apr 17 23:41:56.861240 containerd[1996]: time="2026-04-17T23:41:56.860718942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:56.861240 containerd[1996]: time="2026-04-17T23:41:56.860804939Z" level=info msg="RemovePodSandbox \"b6fa305c57db2266c5492436bb38f1630256f8be9849ae8357aa19ab89d8b753\" returns successfully" Apr 17 23:41:56.862094 containerd[1996]: time="2026-04-17T23:41:56.861659959Z" level=info msg="StopPodSandbox for \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\"" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:56.937 [WARNING][6226] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2", Pod:"coredns-7d764666f9-tkjr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie888b07c0c9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:56.940 [INFO][6226] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:56.940 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" iface="eth0" netns="" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:56.940 [INFO][6226] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:56.940 [INFO][6226] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.041 [INFO][6235] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.041 [INFO][6235] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.041 [INFO][6235] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.055 [WARNING][6235] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.055 [INFO][6235] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.057 [INFO][6235] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:57.069389 containerd[1996]: 2026-04-17 23:41:57.064 [INFO][6226] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.070348 containerd[1996]: time="2026-04-17T23:41:57.070302711Z" level=info msg="TearDown network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" successfully" Apr 17 23:41:57.070348 containerd[1996]: time="2026-04-17T23:41:57.070338342Z" level=info msg="StopPodSandbox for \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" returns successfully" Apr 17 23:41:57.070956 containerd[1996]: time="2026-04-17T23:41:57.070914726Z" level=info msg="RemovePodSandbox for \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\"" Apr 17 23:41:57.070956 containerd[1996]: time="2026-04-17T23:41:57.070950317Z" level=info msg="Forcibly stopping sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\"" Apr 17 23:41:57.084813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027459368.mount: Deactivated successfully. 
Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.163 [WARNING][6250] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0352e4c9-6574-48ee-8802-b8c6cbb3a9cc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"ff7223dcca7440876baf341a1634c5618dabdba4b48ecbd387015bb0aa7a07a2", Pod:"coredns-7d764666f9-tkjr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie888b07c0c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.164 [INFO][6250] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.164 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" iface="eth0" netns="" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.164 [INFO][6250] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.164 [INFO][6250] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.197 [INFO][6257] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.197 [INFO][6257] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.198 [INFO][6257] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.206 [WARNING][6257] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.206 [INFO][6257] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" HandleID="k8s-pod-network.c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--tkjr8-eth0" Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.209 [INFO][6257] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:57.216869 containerd[1996]: 2026-04-17 23:41:57.212 [INFO][6250] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e" Apr 17 23:41:57.216869 containerd[1996]: time="2026-04-17T23:41:57.215375884Z" level=info msg="TearDown network for sandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" successfully" Apr 17 23:41:57.237899 containerd[1996]: time="2026-04-17T23:41:57.237839910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:57.238060 containerd[1996]: time="2026-04-17T23:41:57.237925847Z" level=info msg="RemovePodSandbox \"c248476119240d17de75c2c744f0884c9d76c56c4ae4445ddd6f0fe5ec993e7e\" returns successfully" Apr 17 23:41:57.251248 containerd[1996]: time="2026-04-17T23:41:57.250585861Z" level=info msg="StopPodSandbox for \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\"" Apr 17 23:41:57.346149 systemd[1]: Started sshd@8-172.31.24.87:22-20.229.252.112:59866.service - OpenSSH per-connection server daemon (20.229.252.112:59866). Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.461 [WARNING][6275] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e385655f-a6bc-4d82-b34f-39a693e4f020", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba", Pod:"coredns-7d764666f9-x8pnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa0efc8c7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.461 [INFO][6275] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.462 [INFO][6275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" iface="eth0" netns="" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.462 [INFO][6275] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.462 [INFO][6275] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.522 [INFO][6284] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.523 [INFO][6284] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.523 [INFO][6284] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.538 [WARNING][6284] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.538 [INFO][6284] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.541 [INFO][6284] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:57.553277 containerd[1996]: 2026-04-17 23:41:57.547 [INFO][6275] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.553277 containerd[1996]: time="2026-04-17T23:41:57.551363265Z" level=info msg="TearDown network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" successfully" Apr 17 23:41:57.553277 containerd[1996]: time="2026-04-17T23:41:57.551395501Z" level=info msg="StopPodSandbox for \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" returns successfully" Apr 17 23:41:57.553277 containerd[1996]: time="2026-04-17T23:41:57.552070732Z" level=info msg="RemovePodSandbox for \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\"" Apr 17 23:41:57.553277 containerd[1996]: time="2026-04-17T23:41:57.552106712Z" level=info msg="Forcibly stopping sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\"" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.672 [WARNING][6299] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e385655f-a6bc-4d82-b34f-39a693e4f020", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-87", ContainerID:"3d222a32b31fb7fb43afa28c8e994d0b94a4fbf50d4b8577c30665f99e0b3dba", Pod:"coredns-7d764666f9-x8pnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa0efc8c7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.672 [INFO][6299] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.672 [INFO][6299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" iface="eth0" netns="" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.672 [INFO][6299] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.672 [INFO][6299] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.743 [INFO][6306] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.743 [INFO][6306] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.743 [INFO][6306] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.766 [WARNING][6306] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.766 [INFO][6306] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" HandleID="k8s-pod-network.22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Workload="ip--172--31--24--87-k8s-coredns--7d764666f9--x8pnz-eth0" Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.769 [INFO][6306] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:57.793016 containerd[1996]: 2026-04-17 23:41:57.782 [INFO][6299] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab" Apr 17 23:41:57.795681 containerd[1996]: time="2026-04-17T23:41:57.793068584Z" level=info msg="TearDown network for sandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" successfully" Apr 17 23:41:57.819940 containerd[1996]: time="2026-04-17T23:41:57.819764660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:57.819940 containerd[1996]: time="2026-04-17T23:41:57.819849954Z" level=info msg="RemovePodSandbox \"22ce55fe1f334a3e68809471e5248cb20eb7b9eaafe1c67d3bd2ccf9962748ab\" returns successfully" Apr 17 23:41:58.518539 sshd[6281]: Accepted publickey for core from 20.229.252.112 port 59866 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:41:58.525175 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:58.555426 systemd-logind[1962]: New session 9 of user core. Apr 17 23:41:58.559856 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:41:58.700487 containerd[1996]: time="2026-04-17T23:41:58.642037963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:41:58.736091 containerd[1996]: time="2026-04-17T23:41:58.736032513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:58.791082 containerd[1996]: time="2026-04-17T23:41:58.790930520Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:58.795662 containerd[1996]: time="2026-04-17T23:41:58.795097499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:41:58.797643 containerd[1996]: time="2026-04-17T23:41:58.796847765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 8.061772882s" Apr 17 23:41:58.804287 containerd[1996]: time="2026-04-17T23:41:58.804232520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:41:58.917252 containerd[1996]: time="2026-04-17T23:41:58.917198854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:41:59.301285 containerd[1996]: time="2026-04-17T23:41:59.301193476Z" level=info msg="CreateContainer within sandbox \"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:41:59.510468 containerd[1996]: time="2026-04-17T23:41:59.510427817Z" level=info msg="CreateContainer within sandbox \"1bc06a83845b2ec6d11223c9afc17e46ec2b65baf8406e62b6fb7ef0b4b1f5e1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79\"" Apr 17 23:41:59.515395 containerd[1996]: time="2026-04-17T23:41:59.515355897Z" level=info msg="StartContainer for \"cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79\"" Apr 17 23:42:00.278843 systemd[1]: Started cri-containerd-cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79.scope - libcontainer container cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79. Apr 17 23:42:00.330112 sshd[6281]: pam_unix(sshd:session): session closed for user core Apr 17 23:42:00.343177 systemd-logind[1962]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:42:00.344044 systemd[1]: sshd@8-172.31.24.87:22-20.229.252.112:59866.service: Deactivated successfully. Apr 17 23:42:00.350188 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:42:00.361372 systemd-logind[1962]: Removed session 9. 
Apr 17 23:42:00.430081 containerd[1996]: time="2026-04-17T23:42:00.430037094Z" level=info msg="StartContainer for \"cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79\" returns successfully" Apr 17 23:42:00.669028 containerd[1996]: time="2026-04-17T23:42:00.668975964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:00.671324 containerd[1996]: time="2026-04-17T23:42:00.671253678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:42:00.673907 containerd[1996]: time="2026-04-17T23:42:00.673534123Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:00.677838 containerd[1996]: time="2026-04-17T23:42:00.677642886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:00.678949 containerd[1996]: time="2026-04-17T23:42:00.678723664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.761426003s" Apr 17 23:42:00.679160 containerd[1996]: time="2026-04-17T23:42:00.678779535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:42:00.680750 containerd[1996]: time="2026-04-17T23:42:00.680719698Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:42:00.698985 containerd[1996]: time="2026-04-17T23:42:00.698940177Z" level=info msg="CreateContainer within sandbox \"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:42:00.739779 containerd[1996]: time="2026-04-17T23:42:00.739728911Z" level=info msg="CreateContainer within sandbox \"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a23e8437ce7d58a953a94b94ae6c0c37b2776d3a1079b511926db89e6815040a\"" Apr 17 23:42:00.741215 containerd[1996]: time="2026-04-17T23:42:00.740434746Z" level=info msg="StartContainer for \"a23e8437ce7d58a953a94b94ae6c0c37b2776d3a1079b511926db89e6815040a\"" Apr 17 23:42:00.803023 systemd[1]: Started cri-containerd-a23e8437ce7d58a953a94b94ae6c0c37b2776d3a1079b511926db89e6815040a.scope - libcontainer container a23e8437ce7d58a953a94b94ae6c0c37b2776d3a1079b511926db89e6815040a. 
Apr 17 23:42:00.895786 containerd[1996]: time="2026-04-17T23:42:00.894182124Z" level=info msg="StartContainer for \"a23e8437ce7d58a953a94b94ae6c0c37b2776d3a1079b511926db89e6815040a\" returns successfully" Apr 17 23:42:02.211125 kubelet[3208]: I0417 23:42:02.119007 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-rfp8t" podStartSLOduration=32.292214647 podStartE2EDuration="52.092937845s" podCreationTimestamp="2026-04-17 23:41:10 +0000 UTC" firstStartedPulling="2026-04-17 23:41:39.11016587 +0000 UTC m=+49.684420315" lastFinishedPulling="2026-04-17 23:41:58.910889083 +0000 UTC m=+69.485143513" observedRunningTime="2026-04-17 23:42:02.061199922 +0000 UTC m=+72.635454374" watchObservedRunningTime="2026-04-17 23:42:02.092937845 +0000 UTC m=+72.667192298" Apr 17 23:42:03.332262 containerd[1996]: time="2026-04-17T23:42:03.332119748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:03.336358 containerd[1996]: time="2026-04-17T23:42:03.335658579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:42:03.338708 containerd[1996]: time="2026-04-17T23:42:03.338052248Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:03.345615 containerd[1996]: time="2026-04-17T23:42:03.342751476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:03.345615 containerd[1996]: time="2026-04-17T23:42:03.343698472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.662936884s" Apr 17 23:42:03.345615 containerd[1996]: time="2026-04-17T23:42:03.343745364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:42:03.348188 containerd[1996]: time="2026-04-17T23:42:03.348057495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:42:03.358143 containerd[1996]: time="2026-04-17T23:42:03.358092098Z" level=info msg="CreateContainer within sandbox \"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:42:03.385181 containerd[1996]: time="2026-04-17T23:42:03.385049190Z" level=info msg="CreateContainer within sandbox \"9cfeac2c93fa218b6afdd106c82c715a635201c75f38091f1d0188192396d71d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88\"" Apr 17 23:42:03.390282 containerd[1996]: time="2026-04-17T23:42:03.389892161Z" level=info msg="StartContainer for \"c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88\"" Apr 17 23:42:03.478135 systemd[1]: run-containerd-runc-k8s.io-c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88-runc.EH8TKz.mount: Deactivated successfully. Apr 17 23:42:03.488860 systemd[1]: Started cri-containerd-c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88.scope - libcontainer container c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88. 
Apr 17 23:42:03.532878 containerd[1996]: time="2026-04-17T23:42:03.532672000Z" level=info msg="StartContainer for \"c33f16bef64f1467033872c7ee81a8f77cbfb0b4dfd4742e013d7cb9255bbc88\" returns successfully" Apr 17 23:42:04.198382 kubelet[3208]: I0417 23:42:04.198312 3208 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:42:04.204068 kubelet[3208]: I0417 23:42:04.204025 3208 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:42:05.535145 systemd[1]: Started sshd@9-172.31.24.87:22-20.229.252.112:52668.service - OpenSSH per-connection server daemon (20.229.252.112:52668). Apr 17 23:42:05.996368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518274360.mount: Deactivated successfully. Apr 17 23:42:06.048113 containerd[1996]: time="2026-04-17T23:42:06.048056791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:06.051505 containerd[1996]: time="2026-04-17T23:42:06.051412483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:42:06.052759 containerd[1996]: time="2026-04-17T23:42:06.052695530Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:06.058368 containerd[1996]: time="2026-04-17T23:42:06.058279272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:42:06.059649 containerd[1996]: time="2026-04-17T23:42:06.059275002Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.710759466s" Apr 17 23:42:06.059649 containerd[1996]: time="2026-04-17T23:42:06.059326635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:42:06.066509 containerd[1996]: time="2026-04-17T23:42:06.066461134Z" level=info msg="CreateContainer within sandbox \"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:42:06.090150 containerd[1996]: time="2026-04-17T23:42:06.089486700Z" level=info msg="CreateContainer within sandbox \"7eaac8d4aded285e532009841766499fbfae276c96597e673b9fb24b7d899c5d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9fb0217d0370782f9d23efacd4cf595b28467e00b5993a32ec8dbddbf1af725d\"" Apr 17 23:42:06.093481 containerd[1996]: time="2026-04-17T23:42:06.091927782Z" level=info msg="StartContainer for \"9fb0217d0370782f9d23efacd4cf595b28467e00b5993a32ec8dbddbf1af725d\"" Apr 17 23:42:06.178012 systemd[1]: Started cri-containerd-9fb0217d0370782f9d23efacd4cf595b28467e00b5993a32ec8dbddbf1af725d.scope - libcontainer container 9fb0217d0370782f9d23efacd4cf595b28467e00b5993a32ec8dbddbf1af725d. 
Apr 17 23:42:06.234485 containerd[1996]: time="2026-04-17T23:42:06.234393878Z" level=info msg="StartContainer for \"9fb0217d0370782f9d23efacd4cf595b28467e00b5993a32ec8dbddbf1af725d\" returns successfully"
Apr 17 23:42:06.525310 kubelet[3208]: I0417 23:42:06.524245 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-jfd6r" podStartSLOduration=30.469780749 podStartE2EDuration="55.523835329s" podCreationTimestamp="2026-04-17 23:41:11 +0000 UTC" firstStartedPulling="2026-04-17 23:41:38.293017889 +0000 UTC m=+48.867272327" lastFinishedPulling="2026-04-17 23:42:03.347072461 +0000 UTC m=+73.921326907" observedRunningTime="2026-04-17 23:42:04.514588035 +0000 UTC m=+75.088842488" watchObservedRunningTime="2026-04-17 23:42:06.523835329 +0000 UTC m=+77.098089782"
Apr 17 23:42:06.539918 kubelet[3208]: I0417 23:42:06.537834 3208 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5669bb44bd-8wz55" podStartSLOduration=4.717314672 podStartE2EDuration="29.537808744s" podCreationTimestamp="2026-04-17 23:41:37 +0000 UTC" firstStartedPulling="2026-04-17 23:41:41.239982528 +0000 UTC m=+51.814236957" lastFinishedPulling="2026-04-17 23:42:06.060476583 +0000 UTC m=+76.634731029" observedRunningTime="2026-04-17 23:42:06.522257911 +0000 UTC m=+77.096512367" watchObservedRunningTime="2026-04-17 23:42:06.537808744 +0000 UTC m=+77.112063196"
Apr 17 23:42:06.718578 sshd[6484]: Accepted publickey for core from 20.229.252.112 port 52668 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:06.722891 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:06.744868 systemd-logind[1962]: New session 10 of user core.
Apr 17 23:42:06.748807 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:42:08.525405 sshd[6484]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:08.530119 systemd[1]: sshd@9-172.31.24.87:22-20.229.252.112:52668.service: Deactivated successfully.
Apr 17 23:42:08.533240 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:42:08.536275 systemd-logind[1962]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:42:08.545868 systemd-logind[1962]: Removed session 10.
Apr 17 23:42:13.708000 systemd[1]: Started sshd@10-172.31.24.87:22-20.229.252.112:52674.service - OpenSSH per-connection server daemon (20.229.252.112:52674).
Apr 17 23:42:14.861012 sshd[6591]: Accepted publickey for core from 20.229.252.112 port 52674 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:14.865330 sshd[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:14.871616 systemd-logind[1962]: New session 11 of user core.
Apr 17 23:42:14.875820 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:42:16.403343 sshd[6591]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:16.410424 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:42:16.412376 systemd[1]: sshd@10-172.31.24.87:22-20.229.252.112:52674.service: Deactivated successfully.
Apr 17 23:42:16.417433 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:42:16.419131 systemd-logind[1962]: Removed session 11.
Apr 17 23:42:16.571223 systemd[1]: Started sshd@11-172.31.24.87:22-20.229.252.112:37264.service - OpenSSH per-connection server daemon (20.229.252.112:37264).
Apr 17 23:42:17.567359 sshd[6606]: Accepted publickey for core from 20.229.252.112 port 37264 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:17.569122 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:17.575488 systemd-logind[1962]: New session 12 of user core.
Apr 17 23:42:17.580864 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:42:18.529751 sshd[6606]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:18.534788 systemd[1]: sshd@11-172.31.24.87:22-20.229.252.112:37264.service: Deactivated successfully.
Apr 17 23:42:18.538090 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:42:18.539190 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:42:18.540512 systemd-logind[1962]: Removed session 12.
Apr 17 23:42:18.706667 systemd[1]: Started sshd@12-172.31.24.87:22-20.229.252.112:37278.service - OpenSSH per-connection server daemon (20.229.252.112:37278).
Apr 17 23:42:19.721354 sshd[6617]: Accepted publickey for core from 20.229.252.112 port 37278 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:19.723367 sshd[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:19.728805 systemd-logind[1962]: New session 13 of user core.
Apr 17 23:42:19.735926 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:42:20.621037 sshd[6617]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:20.632840 systemd[1]: sshd@12-172.31.24.87:22-20.229.252.112:37278.service: Deactivated successfully.
Apr 17 23:42:20.636360 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:42:20.638510 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:42:20.640179 systemd-logind[1962]: Removed session 13.
Apr 17 23:42:20.953031 kubelet[3208]: I0417 23:42:20.947034 3208 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:42:25.808007 systemd[1]: Started sshd@13-172.31.24.87:22-20.229.252.112:60294.service - OpenSSH per-connection server daemon (20.229.252.112:60294).
Apr 17 23:42:26.934625 sshd[6662]: Accepted publickey for core from 20.229.252.112 port 60294 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:26.939780 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:26.946444 systemd-logind[1962]: New session 14 of user core.
Apr 17 23:42:26.953885 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:42:28.017038 sshd[6662]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:28.026920 systemd[1]: sshd@13-172.31.24.87:22-20.229.252.112:60294.service: Deactivated successfully.
Apr 17 23:42:28.029417 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:42:28.031372 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:42:28.033295 systemd-logind[1962]: Removed session 14.
Apr 17 23:42:28.179069 systemd[1]: Started sshd@14-172.31.24.87:22-20.229.252.112:60302.service - OpenSSH per-connection server daemon (20.229.252.112:60302).
Apr 17 23:42:29.189271 sshd[6676]: Accepted publickey for core from 20.229.252.112 port 60302 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:29.192073 sshd[6676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:29.197978 systemd-logind[1962]: New session 15 of user core.
Apr 17 23:42:29.206834 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:42:30.412453 sshd[6676]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:30.420876 systemd[1]: sshd@14-172.31.24.87:22-20.229.252.112:60302.service: Deactivated successfully.
Apr 17 23:42:30.424568 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:42:30.425806 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:42:30.426983 systemd-logind[1962]: Removed session 15.
Apr 17 23:42:30.580222 systemd[1]: Started sshd@15-172.31.24.87:22-20.229.252.112:60314.service - OpenSSH per-connection server daemon (20.229.252.112:60314).
Apr 17 23:42:31.623279 sshd[6687]: Accepted publickey for core from 20.229.252.112 port 60314 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:31.625869 sshd[6687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:31.632167 systemd-logind[1962]: New session 16 of user core.
Apr 17 23:42:31.637862 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:42:32.524894 systemd[1]: run-containerd-runc-k8s.io-cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79-runc.EqFRPA.mount: Deactivated successfully.
Apr 17 23:42:33.221577 sshd[6687]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:33.229375 systemd[1]: sshd@15-172.31.24.87:22-20.229.252.112:60314.service: Deactivated successfully.
Apr 17 23:42:33.234537 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:42:33.236905 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:42:33.238390 systemd-logind[1962]: Removed session 16.
Apr 17 23:42:33.398154 systemd[1]: Started sshd@16-172.31.24.87:22-20.229.252.112:60330.service - OpenSSH per-connection server daemon (20.229.252.112:60330).
Apr 17 23:42:34.461691 sshd[6740]: Accepted publickey for core from 20.229.252.112 port 60330 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:34.466856 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:34.472389 systemd-logind[1962]: New session 17 of user core.
Apr 17 23:42:34.480849 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:42:36.106174 sshd[6740]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:36.110823 systemd[1]: sshd@16-172.31.24.87:22-20.229.252.112:60330.service: Deactivated successfully.
Apr 17 23:42:36.114050 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:42:36.115210 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:42:36.116561 systemd-logind[1962]: Removed session 17.
Apr 17 23:42:36.278065 systemd[1]: run-containerd-runc-k8s.io-57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a-runc.M9INeH.mount: Deactivated successfully.
Apr 17 23:42:36.291484 systemd[1]: Started sshd@17-172.31.24.87:22-20.229.252.112:53762.service - OpenSSH per-connection server daemon (20.229.252.112:53762).
Apr 17 23:42:37.391634 sshd[6769]: Accepted publickey for core from 20.229.252.112 port 53762 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:37.394760 sshd[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:37.403498 systemd-logind[1962]: New session 18 of user core.
Apr 17 23:42:37.409875 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:42:37.780627 systemd[1]: run-containerd-runc-k8s.io-cbc3dccc38385ea2cf03e6dac24baba01a68a28820940a6294e01ec258353c79-runc.9mpxz3.mount: Deactivated successfully.
Apr 17 23:42:38.736700 sshd[6769]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:38.741544 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:42:38.742226 systemd[1]: sshd@17-172.31.24.87:22-20.229.252.112:53762.service: Deactivated successfully.
Apr 17 23:42:38.745016 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:42:38.746204 systemd-logind[1962]: Removed session 18.
Apr 17 23:42:43.915864 systemd[1]: Started sshd@18-172.31.24.87:22-20.229.252.112:53778.service - OpenSSH per-connection server daemon (20.229.252.112:53778).
Apr 17 23:42:44.968009 sshd[6810]: Accepted publickey for core from 20.229.252.112 port 53778 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:44.970326 sshd[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:44.975656 systemd-logind[1962]: New session 19 of user core.
Apr 17 23:42:44.983835 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:42:45.781182 sshd[6810]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:45.785713 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:42:45.786161 systemd[1]: sshd@18-172.31.24.87:22-20.229.252.112:53778.service: Deactivated successfully.
Apr 17 23:42:45.789227 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:42:45.790497 systemd-logind[1962]: Removed session 19.
Apr 17 23:42:50.951014 systemd[1]: Started sshd@19-172.31.24.87:22-20.229.252.112:53250.service - OpenSSH per-connection server daemon (20.229.252.112:53250).
Apr 17 23:42:52.048413 sshd[6826]: Accepted publickey for core from 20.229.252.112 port 53250 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:52.052479 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:52.058051 systemd-logind[1962]: New session 20 of user core.
Apr 17 23:42:52.064844 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:42:53.288286 sshd[6826]: pam_unix(sshd:session): session closed for user core
Apr 17 23:42:53.292087 systemd[1]: sshd@19-172.31.24.87:22-20.229.252.112:53250.service: Deactivated successfully.
Apr 17 23:42:53.295486 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:42:53.297225 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:42:53.299148 systemd-logind[1962]: Removed session 20.
Apr 17 23:42:58.478909 systemd[1]: Started sshd@20-172.31.24.87:22-20.229.252.112:50124.service - OpenSSH per-connection server daemon (20.229.252.112:50124).
Apr 17 23:42:59.510978 sshd[6881]: Accepted publickey for core from 20.229.252.112 port 50124 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:42:59.513376 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:42:59.519183 systemd-logind[1962]: New session 21 of user core.
Apr 17 23:42:59.523794 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:43:00.636529 sshd[6881]: pam_unix(sshd:session): session closed for user core
Apr 17 23:43:00.640833 systemd[1]: sshd@20-172.31.24.87:22-20.229.252.112:50124.service: Deactivated successfully.
Apr 17 23:43:00.643768 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:43:00.645515 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:43:00.647728 systemd-logind[1962]: Removed session 21.
Apr 17 23:43:05.822222 systemd[1]: Started sshd@21-172.31.24.87:22-20.229.252.112:37262.service - OpenSSH per-connection server daemon (20.229.252.112:37262).
Apr 17 23:43:06.276915 systemd[1]: run-containerd-runc-k8s.io-57e0a539618ecc57636766ec43fce6e59551a3959233334e0466dcd59804ff7a-runc.cAVRiQ.mount: Deactivated successfully.
Apr 17 23:43:06.887448 sshd[6926]: Accepted publickey for core from 20.229.252.112 port 37262 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:43:06.893891 sshd[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:43:06.902153 systemd-logind[1962]: New session 22 of user core.
Apr 17 23:43:06.909819 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:43:08.362373 sshd[6926]: pam_unix(sshd:session): session closed for user core
Apr 17 23:43:08.369957 systemd[1]: sshd@21-172.31.24.87:22-20.229.252.112:37262.service: Deactivated successfully.
Apr 17 23:43:08.373506 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:43:08.374570 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:43:08.375995 systemd-logind[1962]: Removed session 22.