Apr 13 20:17:07.933578 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:17:07.933615 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:17:07.933635 kernel: BIOS-provided physical RAM map:
Apr 13 20:17:07.933647 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:17:07.933658 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 13 20:17:07.933669 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 13 20:17:07.933681 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 13 20:17:07.933692 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 13 20:17:07.933703 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 13 20:17:07.933717 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 13 20:17:07.933728 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 13 20:17:07.933739 kernel: NX (Execute Disable) protection: active
Apr 13 20:17:07.933750 kernel: APIC: Static calls initialized
Apr 13 20:17:07.933763 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:17:07.933779 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 13 20:17:07.933795 kernel: SMBIOS 2.7 present.
Apr 13 20:17:07.933808 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 13 20:17:07.933822 kernel: Hypervisor detected: KVM
Apr 13 20:17:07.933835 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:17:07.933848 kernel: kvm-clock: using sched offset of 3953547591 cycles
Apr 13 20:17:07.933862 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:17:07.933877 kernel: tsc: Detected 2499.998 MHz processor
Apr 13 20:17:07.933892 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:17:07.933906 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:17:07.934004 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 13 20:17:07.934024 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:17:07.934051 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:17:07.934066 kernel: Using GB pages for direct mapping
Apr 13 20:17:07.934081 kernel: Secure boot disabled
Apr 13 20:17:07.934095 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:17:07.934108 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 13 20:17:07.934123 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 20:17:07.934138 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 20:17:07.934153 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 20:17:07.934172 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 13 20:17:07.934187 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 13 20:17:07.934202 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 20:17:07.934217 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 20:17:07.934233 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 13 20:17:07.934247 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 13 20:17:07.934269 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:17:07.934287 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:17:07.934300 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 13 20:17:07.934312 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 13 20:17:07.934326 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 13 20:17:07.934341 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 13 20:17:07.934354 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 13 20:17:07.934368 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 13 20:17:07.934382 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 13 20:17:07.934393 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 13 20:17:07.934406 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 13 20:17:07.934420 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 13 20:17:07.934434 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 13 20:17:07.934447 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 13 20:17:07.934460 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:17:07.934474 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:17:07.934488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 13 20:17:07.934504 kernel: NUMA: Initialized distance table, cnt=1
Apr 13 20:17:07.934518 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 13 20:17:07.934533 kernel: Zone ranges:
Apr 13 20:17:07.934550 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:17:07.934566 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 13 20:17:07.934582 kernel: Normal empty
Apr 13 20:17:07.934597 kernel: Movable zone start for each node
Apr 13 20:17:07.934612 kernel: Early memory node ranges
Apr 13 20:17:07.934628 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:17:07.934644 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 13 20:17:07.934661 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 13 20:17:07.934675 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 13 20:17:07.934688 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:17:07.934699 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:17:07.934714 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 13 20:17:07.934731 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 13 20:17:07.934746 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:17:07.934760 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:17:07.934775 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 13 20:17:07.934795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:17:07.934812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:17:07.934829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:17:07.934845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:17:07.934862 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:17:07.934879 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:17:07.934896 kernel: TSC deadline timer available
Apr 13 20:17:07.934912 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:17:07.934928 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:17:07.934948 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 13 20:17:07.934963 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:17:07.934978 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:17:07.934994 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:17:07.935010 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:17:07.935023 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:17:07.935063 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:17:07.935077 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:17:07.935091 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:17:07.935111 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:17:07.935125 kernel: random: crng init done
Apr 13 20:17:07.935137 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:17:07.935151 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:17:07.935166 kernel: Fallback order for Node 0: 0
Apr 13 20:17:07.935180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 13 20:17:07.935197 kernel: Policy zone: DMA32
Apr 13 20:17:07.935211 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:17:07.935229 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved)
Apr 13 20:17:07.935242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:17:07.935255 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:17:07.935270 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:17:07.935284 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:17:07.935297 kernel: Dynamic Preempt: voluntary
Apr 13 20:17:07.935312 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:17:07.935327 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:17:07.935340 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:17:07.935357 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:17:07.935371 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:17:07.935384 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:17:07.935398 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:17:07.935412 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:17:07.935426 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:17:07.935441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:17:07.935470 kernel: Console: colour dummy device 80x25
Apr 13 20:17:07.935485 kernel: printk: console [tty0] enabled
Apr 13 20:17:07.935500 kernel: printk: console [ttyS0] enabled
Apr 13 20:17:07.935515 kernel: ACPI: Core revision 20230628
Apr 13 20:17:07.935530 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 13 20:17:07.935547 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:17:07.935562 kernel: x2apic enabled
Apr 13 20:17:07.935576 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:17:07.935591 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 13 20:17:07.935607 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 13 20:17:07.935627 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 13 20:17:07.935644 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 13 20:17:07.935658 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:17:07.935672 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:17:07.935687 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:17:07.935702 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 13 20:17:07.935718 kernel: RETBleed: Vulnerable
Apr 13 20:17:07.935732 kernel: Speculative Store Bypass: Vulnerable
Apr 13 20:17:07.935748 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:17:07.935763 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:17:07.935781 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 13 20:17:07.935796 kernel: active return thunk: its_return_thunk
Apr 13 20:17:07.935811 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:17:07.935826 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:17:07.935842 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:17:07.935858 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:17:07.935873 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 13 20:17:07.935888 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 13 20:17:07.935904 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:17:07.935921 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:17:07.935938 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:17:07.935959 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:17:07.935976 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:17:07.935993 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 13 20:17:07.936011 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 13 20:17:07.936028 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 13 20:17:07.936433 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 13 20:17:07.936450 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 13 20:17:07.936465 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 13 20:17:07.936481 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 13 20:17:07.936496 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:17:07.936512 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:17:07.936527 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:17:07.936550 kernel: landlock: Up and running.
Apr 13 20:17:07.936565 kernel: SELinux: Initializing.
Apr 13 20:17:07.936580 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:17:07.936596 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:17:07.936614 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 13 20:17:07.936631 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:17:07.936649 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:17:07.936667 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:17:07.936685 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 13 20:17:07.936706 kernel: signal: max sigframe size: 3632
Apr 13 20:17:07.936721 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:17:07.936736 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:17:07.936751 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:17:07.936766 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:17:07.936781 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:17:07.936795 kernel: .... node #0, CPUs: #1
Apr 13 20:17:07.936812 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:17:07.936828 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:17:07.936846 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:17:07.936861 kernel: smpboot: Max logical packages: 1
Apr 13 20:17:07.936876 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 13 20:17:07.936891 kernel: devtmpfs: initialized
Apr 13 20:17:07.936905 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:17:07.936920 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 13 20:17:07.936936 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:17:07.936951 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:17:07.936967 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:17:07.936985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:17:07.937000 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:17:07.937015 kernel: audit: type=2000 audit(1776111427.461:1): state=initialized audit_enabled=0 res=1
Apr 13 20:17:07.937031 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:17:07.937533 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:17:07.937553 kernel: cpuidle: using governor menu
Apr 13 20:17:07.937570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:17:07.937586 kernel: dca service started, version 1.12.1
Apr 13 20:17:07.937601 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:17:07.937623 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:17:07.937639 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:17:07.937655 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:17:07.937671 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:17:07.937686 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:17:07.937702 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:17:07.937718 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:17:07.937734 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:17:07.937750 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:17:07.937769 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:17:07.937785 kernel: ACPI: Interpreter enabled
Apr 13 20:17:07.937801 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:17:07.937817 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:17:07.937832 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:17:07.937845 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:17:07.937857 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 13 20:17:07.937871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:17:07.938194 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:17:07.938351 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:17:07.938485 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:17:07.938503 kernel: acpiphp: Slot [3] registered
Apr 13 20:17:07.938519 kernel: acpiphp: Slot [4] registered
Apr 13 20:17:07.938533 kernel: acpiphp: Slot [5] registered
Apr 13 20:17:07.938547 kernel: acpiphp: Slot [6] registered
Apr 13 20:17:07.938562 kernel: acpiphp: Slot [7] registered
Apr 13 20:17:07.938579 kernel: acpiphp: Slot [8] registered
Apr 13 20:17:07.938593 kernel: acpiphp: Slot [9] registered
Apr 13 20:17:07.938607 kernel: acpiphp: Slot [10] registered
Apr 13 20:17:07.938622 kernel: acpiphp: Slot [11] registered
Apr 13 20:17:07.938637 kernel: acpiphp: Slot [12] registered
Apr 13 20:17:07.938651 kernel: acpiphp: Slot [13] registered
Apr 13 20:17:07.938665 kernel: acpiphp: Slot [14] registered
Apr 13 20:17:07.938680 kernel: acpiphp: Slot [15] registered
Apr 13 20:17:07.938693 kernel: acpiphp: Slot [16] registered
Apr 13 20:17:07.938708 kernel: acpiphp: Slot [17] registered
Apr 13 20:17:07.938725 kernel: acpiphp: Slot [18] registered
Apr 13 20:17:07.938739 kernel: acpiphp: Slot [19] registered
Apr 13 20:17:07.938753 kernel: acpiphp: Slot [20] registered
Apr 13 20:17:07.938767 kernel: acpiphp: Slot [21] registered
Apr 13 20:17:07.938781 kernel: acpiphp: Slot [22] registered
Apr 13 20:17:07.938796 kernel: acpiphp: Slot [23] registered
Apr 13 20:17:07.938810 kernel: acpiphp: Slot [24] registered
Apr 13 20:17:07.938825 kernel: acpiphp: Slot [25] registered
Apr 13 20:17:07.938839 kernel: acpiphp: Slot [26] registered
Apr 13 20:17:07.938857 kernel: acpiphp: Slot [27] registered
Apr 13 20:17:07.938872 kernel: acpiphp: Slot [28] registered
Apr 13 20:17:07.938886 kernel: acpiphp: Slot [29] registered
Apr 13 20:17:07.938900 kernel: acpiphp: Slot [30] registered
Apr 13 20:17:07.938914 kernel: acpiphp: Slot [31] registered
Apr 13 20:17:07.938928 kernel: PCI host bridge to bus 0000:00
Apr 13 20:17:07.939073 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:17:07.939197 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:17:07.939321 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:17:07.939439 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 13 20:17:07.939603 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:17:07.939718 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:17:07.939875 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:17:07.940017 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 13 20:17:07.940212 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 13 20:17:07.940356 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:17:07.940492 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 13 20:17:07.940636 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 13 20:17:07.940776 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 13 20:17:07.940914 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 13 20:17:07.941093 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 13 20:17:07.941245 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 13 20:17:07.942166 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 13 20:17:07.942345 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 13 20:17:07.942502 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:17:07.942660 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 13 20:17:07.942807 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:17:07.942957 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 20:17:07.944155 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 13 20:17:07.944324 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 20:17:07.944471 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 13 20:17:07.944495 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:17:07.944513 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:17:07.944529 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:17:07.944544 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:17:07.944560 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:17:07.944582 kernel: iommu: Default domain type: Translated
Apr 13 20:17:07.944598 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:17:07.944614 kernel: efivars: Registered efivars operations
Apr 13 20:17:07.944630 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:17:07.944646 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:17:07.944663 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 13 20:17:07.944678 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 13 20:17:07.944833 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 13 20:17:07.944978 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 13 20:17:07.945152 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:17:07.945171 kernel: vgaarb: loaded
Apr 13 20:17:07.945186 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 13 20:17:07.945200 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 13 20:17:07.945215 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:17:07.945227 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:17:07.945241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:17:07.945255 kernel: pnp: PnP ACPI init
Apr 13 20:17:07.945269 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:17:07.945289 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:17:07.945303 kernel: NET: Registered PF_INET protocol family
Apr 13 20:17:07.945317 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:17:07.945332 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 13 20:17:07.945346 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:17:07.945360 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:17:07.945375 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 13 20:17:07.945389 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 13 20:17:07.945405 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:17:07.945418 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:17:07.945432 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:17:07.945446 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:17:07.945573 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:17:07.945692 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:17:07.945811 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:17:07.946015 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 13 20:17:07.948229 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:17:07.948400 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:17:07.948424 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:17:07.948443 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:17:07.948460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 13 20:17:07.948478 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:17:07.948495 kernel: Initialise system trusted keyrings
Apr 13 20:17:07.948513 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 13 20:17:07.948530 kernel: Key type asymmetric registered
Apr 13 20:17:07.948551 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:17:07.948568 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:17:07.948586 kernel: io scheduler mq-deadline registered
Apr 13 20:17:07.948603 kernel: io scheduler kyber registered
Apr 13 20:17:07.948619 kernel: io scheduler bfq registered
Apr 13 20:17:07.948636 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:17:07.948652 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:17:07.948670 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:17:07.948687 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:17:07.948708 kernel: i8042: Warning: Keylock active
Apr 13 20:17:07.948725 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:17:07.948741 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:17:07.948892 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 13 20:17:07.949030 kernel: rtc_cmos 00:00: registered as rtc0
Apr 13 20:17:07.949186 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:17:07 UTC (1776111427)
Apr 13 20:17:07.949319 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 13 20:17:07.949340 kernel: intel_pstate: CPU model not supported
Apr 13 20:17:07.949361 kernel: efifb: probing for efifb
Apr 13 20:17:07.949378 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 13 20:17:07.949395 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 13 20:17:07.949411 kernel: efifb: scrolling: redraw
Apr 13 20:17:07.949427 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 13 20:17:07.949443 kernel: Console: switching to colour frame buffer device 100x37
Apr 13 20:17:07.949459 kernel: fb0: EFI VGA frame buffer device
Apr 13 20:17:07.949475 kernel: pstore: Using crash dump compression: deflate
Apr 13 20:17:07.949490 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 20:17:07.949510 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:17:07.949526 kernel: Segment Routing with IPv6
Apr 13 20:17:07.949542 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:17:07.949557 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:17:07.949574 kernel: Key type dns_resolver registered
Apr 13 20:17:07.949590 kernel: IPI shorthand broadcast: enabled
Apr 13 20:17:07.949634 kernel: sched_clock: Marking stable (479002953, 128342633)->(674667417, -67321831)
Apr 13 20:17:07.949654 kernel: registered taskstats version 1
Apr 13 20:17:07.949672 kernel: Loading compiled-in X.509 certificates
Apr 13 20:17:07.949691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:17:07.949708 kernel: Key type .fscrypt registered
Apr 13 20:17:07.949725 kernel: Key type fscrypt-provisioning registered
Apr 13 20:17:07.949740 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:17:07.949755 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:17:07.949771 kernel: ima: No architecture policies found
Apr 13 20:17:07.949787 kernel: clk: Disabling unused clocks
Apr 13 20:17:07.949804 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:17:07.949821 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:17:07.949842 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:17:07.949858 kernel: Run /init as init process
Apr 13 20:17:07.949875 kernel: with arguments:
Apr 13 20:17:07.949893 kernel: /init
Apr 13 20:17:07.949921 kernel: with environment:
Apr 13 20:17:07.949938 kernel: HOME=/
Apr 13 20:17:07.949956 kernel: TERM=linux
Apr 13 20:17:07.949978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:17:07.950003 systemd[1]: Detected virtualization amazon.
Apr 13 20:17:07.950021 systemd[1]: Detected architecture x86-64.
Apr 13 20:17:07.951403 systemd[1]: Running in initrd.
Apr 13 20:17:07.951426 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:17:07.951445 systemd[1]: Hostname set to .
Apr 13 20:17:07.951468 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:17:07.951485 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:17:07.951503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:17:07.951525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:17:07.951544 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:17:07.951561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:17:07.951578 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:17:07.951599 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:17:07.951624 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:17:07.951641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:17:07.951656 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:17:07.951671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:17:07.951687 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:17:07.951703 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:17:07.951718 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:17:07.951737 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:17:07.951754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:17:07.951769 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:17:07.951786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:17:07.951802 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:17:07.951819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:17:07.951835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:17:07.951851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:17:07.951867 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:17:07.951886 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:17:07.951902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:17:07.951918 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:17:07.951935 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:17:07.951951 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:17:07.951967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:17:07.951984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:17:07.952049 systemd-journald[179]: Collecting audit messages is disabled.
Apr 13 20:17:07.952093 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:17:07.952110 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:17:07.952126 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:17:07.952148 systemd-journald[179]: Journal started
Apr 13 20:17:07.952182 systemd-journald[179]: Runtime Journal (/run/log/journal/ec28db66bcef93c0c011ab126bd22bd3) is 4.7M, max 38.2M, 33.4M free.
Apr 13 20:17:07.957123 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:17:07.967299 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:17:07.971739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:17:07.975468 systemd-modules-load[180]: Inserted module 'overlay'
Apr 13 20:17:07.976449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:07.990486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:17:07.994460 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:17:08.007225 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:17:08.017468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:17:08.018474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:17:08.035064 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:17:08.035760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:17:08.039763 kernel: Bridge firewalling registered
Apr 13 20:17:08.038351 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 13 20:17:08.040163 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:17:08.041004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:17:08.046234 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:17:08.051241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:17:08.064544 dracut-cmdline[210]: dracut-dracut-053
Apr 13 20:17:08.069143 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:17:08.071235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:17:08.081340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:17:08.126851 systemd-resolved[228]: Positive Trust Anchors:
Apr 13 20:17:08.127846 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:17:08.127914 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:17:08.135211 systemd-resolved[228]: Defaulting to hostname 'linux'.
Apr 13 20:17:08.139348 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:17:08.140061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:17:08.165078 kernel: SCSI subsystem initialized
Apr 13 20:17:08.176070 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:17:08.187064 kernel: iscsi: registered transport (tcp)
Apr 13 20:17:08.208188 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:17:08.208270 kernel: QLogic iSCSI HBA Driver
Apr 13 20:17:08.248131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:17:08.254261 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:17:08.280132 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:17:08.280208 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:17:08.281274 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:17:08.324066 kernel: raid6: avx512x4 gen() 15388 MB/s
Apr 13 20:17:08.342079 kernel: raid6: avx512x2 gen() 15298 MB/s
Apr 13 20:17:08.360061 kernel: raid6: avx512x1 gen() 15416 MB/s
Apr 13 20:17:08.378063 kernel: raid6: avx2x4 gen() 15862 MB/s
Apr 13 20:17:08.396060 kernel: raid6: avx2x2 gen() 16315 MB/s
Apr 13 20:17:08.414261 kernel: raid6: avx2x1 gen() 12662 MB/s
Apr 13 20:17:08.414310 kernel: raid6: using algorithm avx2x2 gen() 16315 MB/s
Apr 13 20:17:08.433255 kernel: raid6: .... xor() 17554 MB/s, rmw enabled
Apr 13 20:17:08.433315 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 20:17:08.455083 kernel: xor: automatically using best checksumming function   avx
Apr 13 20:17:08.615074 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:17:08.625703 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:17:08.631320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:17:08.646465 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 13 20:17:08.651509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:17:08.662313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:17:08.682668 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Apr 13 20:17:08.713695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:17:08.723312 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:17:08.774660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:17:08.784505 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:17:08.809929 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:17:08.814337 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:17:08.814989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:17:08.816136 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:17:08.824301 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:17:08.846950 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:17:08.879015 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 20:17:08.879320 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 20:17:08.886066 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:17:08.894079 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 13 20:17:08.902087 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:96:df:21:5b:d3
Apr 13 20:17:08.913646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:17:08.914858 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:17:08.916815 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:17:08.919444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:17:08.927962 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 20:17:08.929121 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 13 20:17:08.919731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:08.920383 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:17:08.928442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:17:08.938133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:17:08.938924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:08.946115 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 20:17:08.947675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:17:08.964650 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:17:08.964695 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:17:08.964716 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:17:08.964738 kernel: GPT:9289727 != 33554431
Apr 13 20:17:08.964758 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:17:08.964777 kernel: GPT:9289727 != 33554431
Apr 13 20:17:08.964795 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:17:08.964814 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:17:08.968536 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:17:08.992309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:08.996260 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:17:09.027779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:17:09.058058 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Apr 13 20:17:09.077061 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Apr 13 20:17:09.137856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 20:17:09.154462 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 20:17:09.165142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 20:17:09.165730 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 20:17:09.173212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 20:17:09.179196 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:17:09.186297 disk-uuid[631]: Primary Header is updated.
Apr 13 20:17:09.186297 disk-uuid[631]: Secondary Entries is updated.
Apr 13 20:17:09.186297 disk-uuid[631]: Secondary Header is updated.
Apr 13 20:17:09.194110 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:17:09.200318 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:17:10.212694 disk-uuid[632]: The operation has completed successfully.
Apr 13 20:17:10.213675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:17:10.362164 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:17:10.362293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:17:10.378275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:17:10.384061 sh[975]: Success
Apr 13 20:17:10.407081 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 13 20:17:10.510432 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:17:10.517177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:17:10.522146 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:17:10.563952 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:17:10.564028 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:17:10.564064 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:17:10.567386 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:17:10.567451 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:17:10.665118 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:17:10.679661 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:17:10.680944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:17:10.692331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:17:10.696242 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:17:10.723069 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:17:10.726764 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:17:10.726837 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:17:10.745062 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:17:10.758317 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:17:10.761459 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:17:10.769347 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:17:10.778329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:17:10.811032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:17:10.817225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:17:10.841110 systemd-networkd[1167]: lo: Link UP
Apr 13 20:17:10.841121 systemd-networkd[1167]: lo: Gained carrier
Apr 13 20:17:10.843149 systemd-networkd[1167]: Enumeration completed
Apr 13 20:17:10.843282 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:17:10.843901 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:17:10.843906 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:17:10.845545 systemd[1]: Reached target network.target - Network.
Apr 13 20:17:10.846792 systemd-networkd[1167]: eth0: Link UP
Apr 13 20:17:10.846798 systemd-networkd[1167]: eth0: Gained carrier
Apr 13 20:17:10.846810 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:17:10.861217 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.17.102/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 20:17:11.134727 ignition[1120]: Ignition 2.19.0
Apr 13 20:17:11.134741 ignition[1120]: Stage: fetch-offline
Apr 13 20:17:11.134998 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:11.135013 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:11.135454 ignition[1120]: Ignition finished successfully
Apr 13 20:17:11.137827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:17:11.143260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:17:11.159299 ignition[1176]: Ignition 2.19.0
Apr 13 20:17:11.159313 ignition[1176]: Stage: fetch
Apr 13 20:17:11.159796 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:11.159811 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:11.159932 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:11.192811 ignition[1176]: PUT result: OK
Apr 13 20:17:11.201402 ignition[1176]: parsed url from cmdline: ""
Apr 13 20:17:11.201425 ignition[1176]: no config URL provided
Apr 13 20:17:11.201438 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:17:11.201456 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:17:11.201484 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:11.203888 ignition[1176]: PUT result: OK
Apr 13 20:17:11.203990 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 20:17:11.206871 ignition[1176]: GET result: OK
Apr 13 20:17:11.207002 ignition[1176]: parsing config with SHA512: 1ab154994e6ae1e62baaf94be846742b120e4f869f0f9128bd8ce889fd5d8547a4e20a58ca0e260a3fbd52419c674d9a9d4dcdf8c4981daf3c5d6fe70397021c
Apr 13 20:17:11.211229 unknown[1176]: fetched base config from "system"
Apr 13 20:17:11.211771 unknown[1176]: fetched base config from "system"
Apr 13 20:17:11.212547 ignition[1176]: fetch: fetch complete
Apr 13 20:17:11.211778 unknown[1176]: fetched user config from "aws"
Apr 13 20:17:11.212555 ignition[1176]: fetch: fetch passed
Apr 13 20:17:11.212627 ignition[1176]: Ignition finished successfully
Apr 13 20:17:11.215194 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:17:11.220306 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:17:11.238011 ignition[1182]: Ignition 2.19.0
Apr 13 20:17:11.238026 ignition[1182]: Stage: kargs
Apr 13 20:17:11.238555 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:11.238570 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:11.238687 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:11.239573 ignition[1182]: PUT result: OK
Apr 13 20:17:11.243207 ignition[1182]: kargs: kargs passed
Apr 13 20:17:11.243270 ignition[1182]: Ignition finished successfully
Apr 13 20:17:11.244940 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:17:11.250335 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:17:11.266735 ignition[1188]: Ignition 2.19.0
Apr 13 20:17:11.266749 ignition[1188]: Stage: disks
Apr 13 20:17:11.267245 ignition[1188]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:11.267260 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:11.267388 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:11.269428 ignition[1188]: PUT result: OK
Apr 13 20:17:11.273317 ignition[1188]: disks: disks passed
Apr 13 20:17:11.273407 ignition[1188]: Ignition finished successfully
Apr 13 20:17:11.275279 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:17:11.275821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:17:11.276259 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:17:11.276834 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:17:11.277433 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:17:11.278236 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:17:11.289366 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:17:11.315961 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 20:17:11.319398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:17:11.325176 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:17:11.434061 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:17:11.434618 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:17:11.435816 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:17:11.454235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:17:11.458257 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:17:11.460319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:17:11.461543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:17:11.461581 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:17:11.467459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:17:11.469027 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:17:11.481068 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Apr 13 20:17:11.486272 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:17:11.486357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:17:11.486381 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:17:11.501069 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:17:11.503118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:17:11.712395 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:17:11.718694 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:17:11.724018 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:17:11.728692 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:17:11.922259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:17:11.926195 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:17:11.929225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:17:11.941708 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:17:11.944081 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:17:11.972523 ignition[1334]: INFO : Ignition 2.19.0
Apr 13 20:17:11.975162 ignition[1334]: INFO : Stage: mount
Apr 13 20:17:11.975162 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:11.975162 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:11.975162 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:11.979078 ignition[1334]: INFO : PUT result: OK
Apr 13 20:17:11.980572 ignition[1334]: INFO : mount: mount passed
Apr 13 20:17:11.982140 ignition[1334]: INFO : Ignition finished successfully
Apr 13 20:17:11.983541 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:17:11.984198 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:17:11.990159 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:17:12.003274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:17:12.025065 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345)
Apr 13 20:17:12.025132 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:17:12.028867 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:17:12.028935 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:17:12.037061 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:17:12.039020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:17:12.065845 ignition[1362]: INFO : Ignition 2.19.0
Apr 13 20:17:12.065845 ignition[1362]: INFO : Stage: files
Apr 13 20:17:12.067420 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:12.067420 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:12.067420 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:12.067420 ignition[1362]: INFO : PUT result: OK
Apr 13 20:17:12.069871 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:17:12.071179 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:17:12.071179 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:17:12.100084 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:17:12.101228 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:17:12.101228 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:17:12.100647 unknown[1362]: wrote ssh authorized keys file for user: core
Apr 13 20:17:12.104252 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:17:12.105148 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:17:12.191557 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:17:12.379282 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:17:12.379282 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:17:12.382644 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 20:17:12.738319 systemd-networkd[1167]: eth0: Gained IPv6LL
Apr 13 20:17:12.844865 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:17:13.270287 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:17:13.270287 ignition[1362]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:17:13.272927 ignition[1362]: INFO : files: files passed
Apr 13 20:17:13.272927 ignition[1362]: INFO : Ignition finished successfully
Apr 13 20:17:13.274808 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:17:13.283334 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:17:13.286561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:17:13.292289 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:17:13.292395 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:17:13.315448 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:17:13.317542 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:17:13.318742 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:17:13.321352 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:17:13.322270 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:17:13.329257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:17:13.364328 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:17:13.364463 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:17:13.365705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:17:13.366896 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:17:13.367748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:17:13.375273 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:17:13.388642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:17:13.399312 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:17:13.411136 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:17:13.411830 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:17:13.412874 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:17:13.413746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:17:13.414019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:17:13.415278 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:17:13.416133 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:17:13.416926 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:17:13.417708 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:17:13.418573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:17:13.419364 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:17:13.420136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:17:13.420903 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:17:13.422070 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:17:13.422861 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:17:13.423584 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:17:13.423763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:17:13.424875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:17:13.425681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:17:13.426497 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:17:13.426658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:17:13.427309 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:17:13.427481 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:17:13.428836 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:17:13.429020 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:17:13.429739 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:17:13.430026 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:17:13.438809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:17:13.442208 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:17:13.442808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:17:13.443096 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:17:13.446411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:17:13.446642 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:17:13.460096 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:17:13.460235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:17:13.465864 ignition[1415]: INFO : Ignition 2.19.0
Apr 13 20:17:13.465864 ignition[1415]: INFO : Stage: umount
Apr 13 20:17:13.467540 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:17:13.467540 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:17:13.467540 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:17:13.470857 ignition[1415]: INFO : PUT result: OK
Apr 13 20:17:13.476797 ignition[1415]: INFO : umount: umount passed
Apr 13 20:17:13.476797 ignition[1415]: INFO : Ignition finished successfully
Apr 13 20:17:13.478592 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:17:13.478759 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:17:13.479970 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:17:13.481756 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:17:13.482511 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:17:13.482603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:17:13.484431 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:17:13.484498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:17:13.484998 systemd[1]: Stopped target network.target - Network.
Apr 13 20:17:13.485452 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:17:13.485516 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:17:13.487175 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:17:13.487755 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:17:13.491142 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:17:13.491536 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:17:13.491861 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:17:13.492258 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:17:13.492324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:17:13.493244 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:17:13.493306 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:17:13.493853 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:17:13.494099 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:17:13.494621 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:17:13.494681 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:17:13.495443 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:17:13.496187 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:17:13.498480 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:17:13.501267 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Apr 13 20:17:13.503300 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:17:13.503450 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:17:13.505984 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:17:13.506346 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:17:13.508229 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:17:13.508293 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:17:13.515850 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:17:13.516435 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:17:13.516541 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:17:13.517248 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:17:13.517306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:17:13.518281 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:17:13.518342 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:17:13.518979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:17:13.519050 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:17:13.520207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:17:13.533718 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:17:13.533876 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:17:13.539519 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:17:13.539744 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:17:13.541401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:17:13.541499 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:17:13.542319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:17:13.542366 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:17:13.542896 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:17:13.542958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:17:13.543654 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:17:13.543711 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:17:13.545843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:17:13.545915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:17:13.553256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:17:13.554838 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:17:13.554930 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:17:13.555634 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:17:13.555699 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:17:13.558174 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:17:13.558237 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:17:13.558786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:17:13.558845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:13.562440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:17:13.562552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:17:13.631026 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:17:13.631200 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:17:13.632862 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:17:13.633407 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:17:13.633499 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:17:13.649382 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:17:13.657577 systemd[1]: Switching root.
Apr 13 20:17:13.693983 systemd-journald[179]: Journal stopped
Apr 13 20:17:15.315343 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:17:15.315444 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:17:15.315471 kernel: SELinux: policy capability open_perms=1
Apr 13 20:17:15.315500 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:17:15.315521 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:17:15.315542 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:17:15.315563 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:17:15.315587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:17:15.315609 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:17:15.315631 kernel: audit: type=1403 audit(1776111433.995:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:17:15.315654 systemd[1]: Successfully loaded SELinux policy in 53.078ms.
Apr 13 20:17:15.315695 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.034ms.
Apr 13 20:17:15.315722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:17:15.315745 systemd[1]: Detected virtualization amazon.
Apr 13 20:17:15.315767 systemd[1]: Detected architecture x86-64.
Apr 13 20:17:15.315789 systemd[1]: Detected first boot.
Apr 13 20:17:15.315808 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:17:15.315827 zram_generator::config[1457]: No configuration found.
Apr 13 20:17:15.315849 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:17:15.315875 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:17:15.315898 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:17:15.315919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:17:15.315940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:17:15.315961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:17:15.315982 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:17:15.316002 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:17:15.316024 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:17:15.316065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:17:15.316085 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:17:15.316109 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:17:15.316131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:17:15.316152 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:17:15.316175 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:17:15.316195 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:17:15.316222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:17:15.316244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:17:15.316265 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:17:15.316289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:17:15.316310 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:17:15.316331 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:17:15.316352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:17:15.316374 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:17:15.316395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:17:15.316420 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:17:15.316441 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:17:15.316464 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:17:15.316485 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:17:15.316506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:17:15.316527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:17:15.316548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:17:15.316568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:17:15.316589 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:17:15.316610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:17:15.316633 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:17:15.316657 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:17:15.316675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:17:15.316693 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:17:15.316711 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:17:15.316729 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:17:15.316749 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:17:15.316768 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:17:15.316787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:17:15.316806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:17:15.316830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:17:15.316849 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:17:15.316870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:17:15.316892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:17:15.316912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:17:15.316932 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:17:15.316951 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:17:15.316974 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:17:15.317000 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:17:15.317022 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:17:15.317076 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:17:15.317096 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:17:15.317113 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:17:15.317132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:17:15.317150 kernel: fuse: init (API version 7.39)
Apr 13 20:17:15.317171 kernel: loop: module loaded
Apr 13 20:17:15.317192 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:17:15.317218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:17:15.317239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:17:15.317261 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:17:15.317282 systemd[1]: Stopped verity-setup.service.
Apr 13 20:17:15.317305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:17:15.317328 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:17:15.317353 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:17:15.317374 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:17:15.317393 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:17:15.317417 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:17:15.317473 systemd-journald[1539]: Collecting audit messages is disabled.
Apr 13 20:17:15.317520 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:17:15.317542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:17:15.317564 systemd-journald[1539]: Journal started
Apr 13 20:17:15.317602 systemd-journald[1539]: Runtime Journal (/run/log/journal/ec28db66bcef93c0c011ab126bd22bd3) is 4.7M, max 38.2M, 33.4M free.
Apr 13 20:17:14.905477 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:17:14.964616 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 13 20:17:14.965061 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:17:15.320075 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:17:15.323105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:17:15.336569 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:17:15.336653 kernel: ACPI: bus type drm_connector registered
Apr 13 20:17:15.331234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:17:15.331435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:17:15.332489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:17:15.332716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:17:15.333757 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:17:15.334529 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:17:15.336542 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:17:15.336914 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:17:15.339046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:17:15.339266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:17:15.341447 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:17:15.342441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:17:15.343534 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:17:15.364611 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:17:15.369805 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:17:15.379519 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:17:15.386118 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:17:15.388170 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:17:15.388236 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:17:15.393014 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:17:15.403688 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:17:15.413277 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:17:15.414271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:17:15.423252 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:17:15.430295 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:17:15.430978 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:17:15.434347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:17:15.436167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:17:15.443965 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:17:15.451242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:17:15.462875 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:17:15.469079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:17:15.469983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:17:15.470722 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:17:15.472195 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:17:15.490269 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:17:15.499677 systemd-journald[1539]: Time spent on flushing to /var/log/journal/ec28db66bcef93c0c011ab126bd22bd3 is 93.582ms for 985 entries.
Apr 13 20:17:15.499677 systemd-journald[1539]: System Journal (/var/log/journal/ec28db66bcef93c0c011ab126bd22bd3) is 8.0M, max 195.6M, 187.6M free.
Apr 13 20:17:15.603211 systemd-journald[1539]: Received client request to flush runtime journal.
Apr 13 20:17:15.603270 kernel: loop0: detected capacity change from 0 to 61336
Apr 13 20:17:15.520577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:17:15.522425 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:17:15.530980 udevadm[1592]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 20:17:15.538917 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:17:15.607563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:17:15.610775 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:17:15.627948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:17:15.629138 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:17:15.634847 systemd-tmpfiles[1587]: ACLs are not supported, ignoring.
Apr 13 20:17:15.634874 systemd-tmpfiles[1587]: ACLs are not supported, ignoring.
Apr 13 20:17:15.646991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:17:15.657681 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:17:15.685446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:17:15.717071 kernel: loop1: detected capacity change from 0 to 140768
Apr 13 20:17:15.727102 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:17:15.737222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:17:15.771147 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Apr 13 20:17:15.771686 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Apr 13 20:17:15.793224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:17:15.813063 kernel: loop2: detected capacity change from 0 to 219192
Apr 13 20:17:15.950073 kernel: loop3: detected capacity change from 0 to 142488
Apr 13 20:17:16.042099 kernel: loop4: detected capacity change from 0 to 61336
Apr 13 20:17:16.067735 kernel: loop5: detected capacity change from 0 to 140768
Apr 13 20:17:16.102244 kernel: loop6: detected capacity change from 0 to 219192
Apr 13 20:17:16.160616 kernel: loop7: detected capacity change from 0 to 142488
Apr 13 20:17:16.189618 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 20:17:16.192231 (sd-merge)[1614]: Merged extensions into '/usr'.
Apr 13 20:17:16.198445 systemd[1]: Reloading requested from client PID 1586 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:17:16.198463 systemd[1]: Reloading...
Apr 13 20:17:16.319071 zram_generator::config[1637]: No configuration found.
Apr 13 20:17:16.539031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:17:16.613195 systemd[1]: Reloading finished in 414 ms.
Apr 13 20:17:16.641054 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:17:16.641855 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:17:16.650295 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:17:16.656412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:17:16.661220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:17:16.681535 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:17:16.682567 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:17:16.684132 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:17:16.684736 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Apr 13 20:17:16.684907 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Apr 13 20:17:16.688876 systemd[1]: Reloading requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:17:16.688896 systemd[1]: Reloading...
Apr 13 20:17:16.694005 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:17:16.694026 systemd-tmpfiles[1693]: Skipping /boot
Apr 13 20:17:16.731203 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:17:16.731220 systemd-tmpfiles[1693]: Skipping /boot
Apr 13 20:17:16.732199 systemd-udevd[1694]: Using default interface naming scheme 'v255'.
Apr 13 20:17:16.824061 zram_generator::config[1725]: No configuration found.
Apr 13 20:17:16.979261 (udev-worker)[1734]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:17:17.072090 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1729)
Apr 13 20:17:17.077099 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 13 20:17:17.115066 ldconfig[1581]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:17:17.137060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 20:17:17.154076 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:17:17.159155 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 13 20:17:17.168059 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Apr 13 20:17:17.168339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:17:17.186134 kernel: ACPI: button: Sleep Button [SLPF]
Apr 13 20:17:17.299210 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:17:17.299683 systemd[1]: Reloading finished in 610 ms.
Apr 13 20:17:17.316515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:17:17.318586 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:17:17.320700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:17:17.398063 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:17:17.428286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:17:17.434116 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:17:17.440409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 20:17:17.441252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:17:17.445303 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:17:17.452364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:17:17.454438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:17:17.458278 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:17:17.466730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:17:17.476291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:17:17.494834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:17:17.495957 lvm[1890]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:17:17.497523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:17:17.498966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:17:17.507384 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:17:17.517190 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:17:17.530244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:17:17.543866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:17:17.545611 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:17:17.556380 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:17:17.563367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:17:17.565125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:17:17.566341 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:17:17.568822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:17:17.569059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:17:17.570097 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:17:17.570289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:17:17.572171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:17:17.573003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:17:17.574190 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:17:17.574375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:17:17.581320 augenrules[1915]: No rules
Apr 13 20:17:17.585766 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:17:17.587624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:17:17.598470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:17:17.608329 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:17:17.609022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:17:17.609120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:17:17.619394 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:17:17.630119 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:17:17.638167 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:17:17.654591 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:17:17.663219 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:17:17.677390 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:17:17.687251 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:17:17.709196 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:17:17.712133 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:17:17.713547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:17:17.778117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:17:17.811116 systemd-networkd[1908]: lo: Link UP
Apr 13 20:17:17.811515 systemd-networkd[1908]: lo: Gained carrier
Apr 13 20:17:17.813693 systemd-networkd[1908]: Enumeration completed
Apr 13 20:17:17.813978 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:17:17.814857 systemd-resolved[1909]: Positive Trust Anchors:
Apr 13 20:17:17.814869 systemd-resolved[1909]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:17:17.814923 systemd-resolved[1909]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:17:17.816570 systemd-networkd[1908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:17:17.816579 systemd-networkd[1908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:17:17.819453 systemd-networkd[1908]: eth0: Link UP
Apr 13 20:17:17.819645 systemd-networkd[1908]: eth0: Gained carrier
Apr 13 20:17:17.819679 systemd-networkd[1908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:17:17.820246 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:17:17.827685 systemd-resolved[1909]: Defaulting to hostname 'linux'.
Apr 13 20:17:17.829130 systemd-networkd[1908]: eth0: DHCPv4 address 172.31.17.102/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 20:17:17.831231 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:17:17.832439 systemd[1]: Reached target network.target - Network.
Apr 13 20:17:17.833089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:17:17.833714 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:17:17.834525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:17:17.835248 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:17:17.836103 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:17:17.836975 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:17:17.837692 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:17:17.838443 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:17:17.838563 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:17:17.839178 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:17:17.840298 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:17:17.842287 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:17:17.847747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:17:17.848860 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:17:17.849403 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:17:17.849808 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:17:17.850316 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:17:17.850358 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:17:17.851492 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:17:17.855288 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:17:17.859284 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:17:17.868709 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:17:17.872264 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:17:17.875407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:17:17.879738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:17:17.899455 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 20:17:17.919188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:17:17.922182 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 20:17:17.930239 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:17:17.938240 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:17:17.942508 extend-filesystems[1955]: Found loop4
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found loop5
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found loop6
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found loop7
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p1
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p2
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p3
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found usr
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p4
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p6
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p7
Apr 13 20:17:17.952529 extend-filesystems[1955]: Found nvme0n1p9
Apr 13 20:17:17.952529 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9
Apr 13 20:17:18.041471 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 13 20:17:17.950260 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:17:18.041622 jq[1954]: false
Apr 13 20:17:18.039277 ntpd[1958]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:17:18.050297 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: ----------------------------------------------------
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: corporation. Support and training for ntp-4 are
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: available at https://www.nwtime.org/support
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: ----------------------------------------------------
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: proto: precision = 0.068 usec (-24)
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: basedate set to 2026-04-01
Apr 13 20:17:18.050901 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:17:17.951683 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:17:18.039304 ntpd[1958]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:17:18.058792 extend-filesystems[1980]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:17:17.953450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:17:18.039315 ntpd[1958]: ----------------------------------------------------
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listen normally on 3 eth0 172.31.17.102:123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listen normally on 4 lo [::1]:123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: bind(21) AF_INET6 fe80::496:dfff:fe21:5bd3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: unable to create socket on eth0 (5) for fe80::496:dfff:fe21:5bd3%2#123
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: failed to init interface for address fe80::496:dfff:fe21:5bd3%2
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: Listening on routing socket on fd #21 for interface updates
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:17:18.059908 ntpd[1958]: 13 Apr 20:17:18 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:17:17.954951 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:17:18.039325 ntpd[1958]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:17:17.981221 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:17:18.039337 ntpd[1958]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:17:17.989593 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:17:18.039347 ntpd[1958]: corporation. Support and training for ntp-4 are
Apr 13 20:17:17.991097 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:17:18.061727 update_engine[1968]: I20260413 20:17:18.053885 1968 main.cc:92] Flatcar Update Engine starting
Apr 13 20:17:18.039357 ntpd[1958]: available at https://www.nwtime.org/support
Apr 13 20:17:18.034196 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:17:18.039368 ntpd[1958]: ----------------------------------------------------
Apr 13 20:17:18.035266 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:17:18.045090 ntpd[1958]: proto: precision = 0.068 usec (-24)
Apr 13 20:17:18.047276 ntpd[1958]: basedate set to 2026-04-01
Apr 13 20:17:18.047298 ntpd[1958]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:17:18.055226 ntpd[1958]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:17:18.055281 ntpd[1958]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:17:18.055537 ntpd[1958]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:17:18.055594 ntpd[1958]: Listen normally on 3 eth0 172.31.17.102:123
Apr 13 20:17:18.055641 ntpd[1958]: Listen normally on 4 lo [::1]:123
Apr 13 20:17:18.055693 ntpd[1958]: bind(21) AF_INET6 fe80::496:dfff:fe21:5bd3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:17:18.055722 ntpd[1958]: unable to create socket on eth0 (5) for fe80::496:dfff:fe21:5bd3%2#123
Apr 13 20:17:18.055740 ntpd[1958]: failed to init interface for address fe80::496:dfff:fe21:5bd3%2
Apr 13 20:17:18.055778 ntpd[1958]: Listening on routing socket on fd #21 for interface updates
Apr 13 20:17:18.057910 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:17:18.057939 ntpd[1958]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:17:18.074780 dbus-daemon[1953]: [system] SELinux support is enabled
Apr 13 20:17:18.075645 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:17:18.084868 jq[1970]: true
Apr 13 20:17:18.092752 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1908 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:17:18.094586 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:17:18.094956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:17:18.095889 update_engine[1968]: I20260413 20:17:18.095659 1968 update_check_scheduler.cc:74] Next update check in 11m33s
Apr 13 20:17:18.108577 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:17:18.109730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:17:18.111359 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:17:18.111393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:17:18.113098 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:17:18.115191 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 20:17:18.123824 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:17:18.131126 tar[1974]: linux-amd64/LICENSE
Apr 13 20:17:18.131126 tar[1974]: linux-amd64/helm
Apr 13 20:17:18.129513 (ntainerd)[1996]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:17:18.133250 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:17:18.150902 jq[1995]: true
Apr 13 20:17:18.171095 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 20:17:18.269067 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 13 20:17:18.278058 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1755)
Apr 13 20:17:18.286015 coreos-metadata[1952]: Apr 13 20:17:18.285 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:17:18.288742 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 13 20:17:18.288742 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 20:17:18.288742 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 13 20:17:18.304621 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.299 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.302 INFO Fetch successful
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.302 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.303 INFO Fetch successful
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.304 INFO Fetch successful
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.306 INFO Fetch successful
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.307 INFO Fetch failed with 404: resource not found
Apr 13 20:17:18.307457 coreos-metadata[1952]: Apr 13 20:17:18.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 13 20:17:18.290306 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:17:18.296305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:17:18.309799 coreos-metadata[1952]: Apr 13 20:17:18.308 INFO Fetch successful
Apr 13 20:17:18.309799 coreos-metadata[1952]: Apr 13 20:17:18.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 13 20:17:18.309799 coreos-metadata[1952]: Apr 13 20:17:18.309 INFO Fetch successful
Apr 13 20:17:18.309799 coreos-metadata[1952]: Apr 13 20:17:18.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 13 20:17:18.312279 coreos-metadata[1952]: Apr 13 20:17:18.311 INFO Fetch successful
Apr 13 20:17:18.312279 coreos-metadata[1952]: Apr 13 20:17:18.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 13 20:17:18.313159 coreos-metadata[1952]: Apr 13 20:17:18.313 INFO Fetch successful
Apr 13 20:17:18.313159 coreos-metadata[1952]: Apr 13 20:17:18.313 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 13 20:17:18.316843 coreos-metadata[1952]: Apr 13 20:17:18.315 INFO Fetch successful
Apr 13 20:17:18.332402 systemd-logind[1966]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:17:18.332435 systemd-logind[1966]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 13 20:17:18.332457 systemd-logind[1966]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:17:18.338767 systemd-logind[1966]: New seat seat0.
Apr 13 20:17:18.349275 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:17:18.358975 bash[2030]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:17:18.360477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:17:18.382277 systemd[1]: Starting sshkeys.service...
Apr 13 20:17:18.432162 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:17:18.433616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:17:18.446479 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:17:18.456267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:17:18.620579 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:17:18.620778 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:17:18.621192 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2005 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:17:18.652614 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:17:18.724262 polkitd[2071]: Started polkitd version 121
Apr 13 20:17:18.729997 coreos-metadata[2047]: Apr 13 20:17:18.727 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:17:18.731435 coreos-metadata[2047]: Apr 13 20:17:18.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 13 20:17:18.736280 coreos-metadata[2047]: Apr 13 20:17:18.734 INFO Fetch successful
Apr 13 20:17:18.736280 coreos-metadata[2047]: Apr 13 20:17:18.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 13 20:17:18.740382 coreos-metadata[2047]: Apr 13 20:17:18.740 INFO Fetch successful
Apr 13 20:17:18.746166 unknown[2047]: wrote ssh authorized keys file for user: core
Apr 13 20:17:18.754848 polkitd[2071]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 20:17:18.754943 polkitd[2071]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 20:17:18.775515 polkitd[2071]: Finished loading, compiling and executing 2 rules
Apr 13 20:17:18.792256 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 20:17:18.793872 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 20:17:18.796000 polkitd[2071]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:17:18.836302 update-ssh-keys[2114]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:17:18.841205 sshd_keygen[1994]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:17:18.842695 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:17:18.849965 systemd[1]: Finished sshkeys.service.
Apr 13 20:17:18.856162 locksmithd[2004]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:17:18.920152 systemd-hostnamed[2005]: Hostname set to (transient)
Apr 13 20:17:18.920281 systemd-resolved[1909]: System hostname changed to 'ip-172-31-17-102'.
Apr 13 20:17:18.981660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:17:18.999430 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:17:19.025337 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:17:19.026088 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:17:19.038224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:17:19.042590 ntpd[1958]: bind(24) AF_INET6 fe80::496:dfff:fe21:5bd3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:17:19.048061 ntpd[1958]: 13 Apr 20:17:19 ntpd[1958]: bind(24) AF_INET6 fe80::496:dfff:fe21:5bd3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:17:19.048061 ntpd[1958]: 13 Apr 20:17:19 ntpd[1958]: unable to create socket on eth0 (6) for fe80::496:dfff:fe21:5bd3%2#123
Apr 13 20:17:19.048061 ntpd[1958]: 13 Apr 20:17:19 ntpd[1958]: failed to init interface for address fe80::496:dfff:fe21:5bd3%2
Apr 13 20:17:19.042652 ntpd[1958]: unable to create socket on eth0 (6) for fe80::496:dfff:fe21:5bd3%2#123
Apr 13 20:17:19.042670 ntpd[1958]: failed to init interface for address fe80::496:dfff:fe21:5bd3%2
Apr 13 20:17:19.055076 containerd[1996]: time="2026-04-13T20:17:19.054055579Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:17:19.074203 systemd-networkd[1908]: eth0: Gained IPv6LL
Apr 13 20:17:19.076657 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:17:19.085509 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:17:19.089472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:17:19.091392 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:17:19.093955 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:17:19.096544 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:17:19.104167 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 13 20:17:19.114358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:17:19.125264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:17:19.127506 containerd[1996]: time="2026-04-13T20:17:19.127452375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.129840 containerd[1996]: time="2026-04-13T20:17:19.129789909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:17:19.130091 containerd[1996]: time="2026-04-13T20:17:19.130068786Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:17:19.130183 containerd[1996]: time="2026-04-13T20:17:19.130166908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:17:19.130447 containerd[1996]: time="2026-04-13T20:17:19.130425606Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:17:19.130536 containerd[1996]: time="2026-04-13T20:17:19.130522554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.130686 containerd[1996]: time="2026-04-13T20:17:19.130665726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:17:19.130762 containerd[1996]: time="2026-04-13T20:17:19.130747305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131076 containerd[1996]: time="2026-04-13T20:17:19.131031741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131160 containerd[1996]: time="2026-04-13T20:17:19.131147196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131238 containerd[1996]: time="2026-04-13T20:17:19.131221494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131315 containerd[1996]: time="2026-04-13T20:17:19.131301726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131489 containerd[1996]: time="2026-04-13T20:17:19.131473042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.131810 containerd[1996]: time="2026-04-13T20:17:19.131788954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:17:19.132066 containerd[1996]: time="2026-04-13T20:17:19.132020663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:17:19.132147 containerd[1996]: time="2026-04-13T20:17:19.132132766Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:17:19.132326 containerd[1996]: time="2026-04-13T20:17:19.132308077Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:17:19.132444 containerd[1996]: time="2026-04-13T20:17:19.132428960Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.143764627Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.143843651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.143878245Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.143900665Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.143921810Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144105563Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144438843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144573960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144597853Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144617937Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144646800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144666361Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144683966Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146107 containerd[1996]: time="2026-04-13T20:17:19.144703650Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144724128Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144744201Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144764501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144783496Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144810569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144829937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144847200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144867938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144886457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144906654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144937278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144958111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.144979186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.146620 containerd[1996]: time="2026-04-13T20:17:19.145003065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145023187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145060519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145084263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145119215Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145152459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145171291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145190627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145258523Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145284597Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145302717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145320972Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145335803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145358904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:17:19.147122 containerd[1996]: time="2026-04-13T20:17:19.145378286Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:17:19.147634 containerd[1996]: time="2026-04-13T20:17:19.145396854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:17:19.147678 containerd[1996]: time="2026-04-13T20:17:19.145785331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:17:19.147678 containerd[1996]: time="2026-04-13T20:17:19.145880766Z" level=info msg="Connect containerd service" Apr 13 20:17:19.147678 containerd[1996]: time="2026-04-13T20:17:19.145934291Z" level=info msg="using legacy CRI server" Apr 13 20:17:19.147678 containerd[1996]: time="2026-04-13T20:17:19.145944222Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:17:19.150056 containerd[1996]: 
time="2026-04-13T20:17:19.148518008Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:17:19.150056 containerd[1996]: time="2026-04-13T20:17:19.149363904Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:17:19.150056 containerd[1996]: time="2026-04-13T20:17:19.149772020Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:17:19.150056 containerd[1996]: time="2026-04-13T20:17:19.149833944Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:17:19.150056 containerd[1996]: time="2026-04-13T20:17:19.149907686Z" level=info msg="Start subscribing containerd event" Apr 13 20:17:19.150056 containerd[1996]: time="2026-04-13T20:17:19.149966862Z" level=info msg="Start recovering state" Apr 13 20:17:19.150410 containerd[1996]: time="2026-04-13T20:17:19.150387061Z" level=info msg="Start event monitor" Apr 13 20:17:19.150520 containerd[1996]: time="2026-04-13T20:17:19.150504847Z" level=info msg="Start snapshots syncer" Apr 13 20:17:19.150629 containerd[1996]: time="2026-04-13T20:17:19.150612446Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:17:19.150725 containerd[1996]: time="2026-04-13T20:17:19.150709934Z" level=info msg="Start streaming server" Apr 13 20:17:19.150900 containerd[1996]: time="2026-04-13T20:17:19.150885511Z" level=info msg="containerd successfully booted in 0.098090s" Apr 13 20:17:19.154683 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:17:19.213055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 13 20:17:19.244236 amazon-ssm-agent[2171]: Initializing new seelog logger Apr 13 20:17:19.245697 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete Apr 13 20:17:19.245697 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.245697 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.245697 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 processing appconfig overrides Apr 13 20:17:19.246725 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.246827 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.246974 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 processing appconfig overrides Apr 13 20:17:19.247380 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.247454 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.247587 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 processing appconfig overrides Apr 13 20:17:19.248152 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO Proxy environment variables: Apr 13 20:17:19.251157 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.251327 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 20:17:19.251481 amazon-ssm-agent[2171]: 2026/04/13 20:17:19 processing appconfig overrides Apr 13 20:17:19.350060 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO https_proxy: Apr 13 20:17:19.442132 tar[1974]: linux-amd64/README.md Apr 13 20:17:19.447070 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO http_proxy: Apr 13 20:17:19.460465 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 13 20:17:19.546667 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO no_proxy: Apr 13 20:17:19.645587 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO Checking if agent identity type OnPrem can be assumed Apr 13 20:17:19.693670 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO Checking if agent identity type EC2 can be assumed Apr 13 20:17:19.693670 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO Agent will take identity from EC2 Apr 13 20:17:19.693670 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 20:17:19.693670 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [Registrar] Starting registrar module Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [EC2Identity] EC2 registration was successful. 
Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [CredentialRefresher] credentialRefresher has started Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 20:17:19.694116 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 20:17:19.744377 amazon-ssm-agent[2171]: 2026-04-13 20:17:19 INFO [CredentialRefresher] Next credential rotation will be in 30.641659318316666 minutes Apr 13 20:17:20.709551 amazon-ssm-agent[2171]: 2026-04-13 20:17:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 20:17:20.810482 amazon-ssm-agent[2171]: 2026-04-13 20:17:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2196) started Apr 13 20:17:20.910748 amazon-ssm-agent[2171]: 2026-04-13 20:17:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 20:17:21.411116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:17:21.413293 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 20:17:21.414947 systemd[1]: Startup finished in 608ms (kernel) + 6.286s (initrd) + 7.472s (userspace) = 14.366s. 
Apr 13 20:17:21.429794 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:17:22.039809 ntpd[1958]: Listen normally on 7 eth0 [fe80::496:dfff:fe21:5bd3%2]:123 Apr 13 20:17:22.040318 ntpd[1958]: 13 Apr 20:17:22 ntpd[1958]: Listen normally on 7 eth0 [fe80::496:dfff:fe21:5bd3%2]:123 Apr 13 20:17:22.450870 kubelet[2212]: E0413 20:17:22.450813 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:17:22.453382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:17:22.453616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:17:22.454157 systemd[1]: kubelet.service: Consumed 1.017s CPU time. Apr 13 20:17:22.640911 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 20:17:22.647450 systemd[1]: Started sshd@0-172.31.17.102:22-50.85.169.122:48396.service - OpenSSH per-connection server daemon (50.85.169.122:48396). Apr 13 20:17:23.676575 sshd[2224]: Accepted publickey for core from 50.85.169.122 port 48396 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:23.678577 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:23.688086 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:17:23.700500 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:17:23.703916 systemd-logind[1966]: New session 1 of user core. Apr 13 20:17:23.716958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 13 20:17:23.723456 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:17:23.733349 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:17:23.855142 systemd[2228]: Queued start job for default target default.target. Apr 13 20:17:23.863389 systemd[2228]: Created slice app.slice - User Application Slice. Apr 13 20:17:23.863432 systemd[2228]: Reached target paths.target - Paths. Apr 13 20:17:23.863454 systemd[2228]: Reached target timers.target - Timers. Apr 13 20:17:23.864914 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:17:23.876814 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:17:23.877750 systemd[2228]: Reached target sockets.target - Sockets. Apr 13 20:17:23.877774 systemd[2228]: Reached target basic.target - Basic System. Apr 13 20:17:23.877839 systemd[2228]: Reached target default.target - Main User Target. Apr 13 20:17:23.877881 systemd[2228]: Startup finished in 136ms. Apr 13 20:17:23.878076 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:17:23.886290 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:17:24.593438 systemd[1]: Started sshd@1-172.31.17.102:22-50.85.169.122:48410.service - OpenSSH per-connection server daemon (50.85.169.122:48410). Apr 13 20:17:25.601144 systemd-resolved[1909]: Clock change detected. Flushing caches. Apr 13 20:17:26.106712 sshd[2239]: Accepted publickey for core from 50.85.169.122 port 48410 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:26.108339 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:26.113419 systemd-logind[1966]: New session 2 of user core. Apr 13 20:17:26.120525 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 13 20:17:26.774411 sshd[2239]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:26.778878 systemd-logind[1966]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:17:26.779963 systemd[1]: sshd@1-172.31.17.102:22-50.85.169.122:48410.service: Deactivated successfully. Apr 13 20:17:26.782145 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:17:26.783092 systemd-logind[1966]: Removed session 2. Apr 13 20:17:26.949555 systemd[1]: Started sshd@2-172.31.17.102:22-50.85.169.122:48418.service - OpenSSH per-connection server daemon (50.85.169.122:48418). Apr 13 20:17:27.931331 sshd[2246]: Accepted publickey for core from 50.85.169.122 port 48418 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:27.931977 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:27.936415 systemd-logind[1966]: New session 3 of user core. Apr 13 20:17:27.947484 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:17:28.611001 sshd[2246]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:28.615198 systemd-logind[1966]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:17:28.616157 systemd[1]: sshd@2-172.31.17.102:22-50.85.169.122:48418.service: Deactivated successfully. Apr 13 20:17:28.618185 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:17:28.619131 systemd-logind[1966]: Removed session 3. Apr 13 20:17:28.772475 systemd[1]: Started sshd@3-172.31.17.102:22-50.85.169.122:48434.service - OpenSSH per-connection server daemon (50.85.169.122:48434). Apr 13 20:17:29.719620 sshd[2253]: Accepted publickey for core from 50.85.169.122 port 48434 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:29.721124 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:29.726254 systemd-logind[1966]: New session 4 of user core. 
Apr 13 20:17:29.731472 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 20:17:30.380127 sshd[2253]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:30.384489 systemd-logind[1966]: Session 4 logged out. Waiting for processes to exit. Apr 13 20:17:30.385617 systemd[1]: sshd@3-172.31.17.102:22-50.85.169.122:48434.service: Deactivated successfully. Apr 13 20:17:30.387741 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 20:17:30.388748 systemd-logind[1966]: Removed session 4. Apr 13 20:17:30.567455 systemd[1]: Started sshd@4-172.31.17.102:22-50.85.169.122:33722.service - OpenSSH per-connection server daemon (50.85.169.122:33722). Apr 13 20:17:31.588357 sshd[2260]: Accepted publickey for core from 50.85.169.122 port 33722 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:31.589042 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:31.594157 systemd-logind[1966]: New session 5 of user core. Apr 13 20:17:31.600468 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 20:17:32.158106 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:17:32.158536 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:17:32.173951 sudo[2263]: pam_unix(sudo:session): session closed for user root Apr 13 20:17:32.340988 sshd[2260]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:32.345808 systemd[1]: sshd@4-172.31.17.102:22-50.85.169.122:33722.service: Deactivated successfully. Apr 13 20:17:32.347922 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:17:32.348692 systemd-logind[1966]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:17:32.349863 systemd-logind[1966]: Removed session 5. 
Apr 13 20:17:32.502547 systemd[1]: Started sshd@5-172.31.17.102:22-50.85.169.122:33736.service - OpenSSH per-connection server daemon (50.85.169.122:33736). Apr 13 20:17:33.254128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 20:17:33.261539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:17:33.447278 sshd[2268]: Accepted publickey for core from 50.85.169.122 port 33736 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:33.447907 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:33.455015 systemd-logind[1966]: New session 6 of user core. Apr 13 20:17:33.461400 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 20:17:33.534445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:17:33.537317 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:17:33.580422 kubelet[2279]: E0413 20:17:33.580369 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:17:33.584373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:17:33.584584 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 20:17:33.952032 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:17:33.952463 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:17:33.956459 sudo[2287]: pam_unix(sudo:session): session closed for user root Apr 13 20:17:33.961993 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:17:33.962486 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:17:33.976604 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 20:17:33.979592 auditctl[2290]: No rules Apr 13 20:17:33.980000 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:17:33.980247 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:17:33.983054 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:17:34.022118 augenrules[2308]: No rules Apr 13 20:17:34.023684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:17:34.025006 sudo[2286]: pam_unix(sudo:session): session closed for user root Apr 13 20:17:34.179946 sshd[2268]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:34.183636 systemd[1]: sshd@5-172.31.17.102:22-50.85.169.122:33736.service: Deactivated successfully. Apr 13 20:17:34.185673 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:17:34.187172 systemd-logind[1966]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:17:34.188365 systemd-logind[1966]: Removed session 6. Apr 13 20:17:34.368591 systemd[1]: Started sshd@6-172.31.17.102:22-50.85.169.122:33742.service - OpenSSH per-connection server daemon (50.85.169.122:33742). 
Apr 13 20:17:35.385342 sshd[2316]: Accepted publickey for core from 50.85.169.122 port 33742 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:17:35.386174 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:17:35.391327 systemd-logind[1966]: New session 7 of user core. Apr 13 20:17:35.397470 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 20:17:35.926893 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:17:35.927315 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:17:36.460575 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 20:17:36.462643 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 20:17:37.063555 dockerd[2335]: time="2026-04-13T20:17:37.063487659Z" level=info msg="Starting up" Apr 13 20:17:37.243554 dockerd[2335]: time="2026-04-13T20:17:37.243502294Z" level=info msg="Loading containers: start." Apr 13 20:17:37.408578 kernel: Initializing XFRM netlink socket Apr 13 20:17:37.488484 (udev-worker)[2357]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:17:37.558346 systemd-networkd[1908]: docker0: Link UP Apr 13 20:17:37.581043 dockerd[2335]: time="2026-04-13T20:17:37.580994493Z" level=info msg="Loading containers: done." 
Apr 13 20:17:37.614578 dockerd[2335]: time="2026-04-13T20:17:37.614515341Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:17:37.614856 dockerd[2335]: time="2026-04-13T20:17:37.614649964Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:17:37.614856 dockerd[2335]: time="2026-04-13T20:17:37.614808220Z" level=info msg="Daemon has completed initialization"
Apr 13 20:17:37.652107 dockerd[2335]: time="2026-04-13T20:17:37.651466400Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:17:37.651591 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:17:38.591846 containerd[1996]: time="2026-04-13T20:17:38.591807054Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 13 20:17:39.199285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843028054.mount: Deactivated successfully.
Apr 13 20:17:40.955813 containerd[1996]: time="2026-04-13T20:17:40.955754621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:40.957391 containerd[1996]: time="2026-04-13T20:17:40.957088291Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947742"
Apr 13 20:17:40.960227 containerd[1996]: time="2026-04-13T20:17:40.958800784Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:40.962830 containerd[1996]: time="2026-04-13T20:17:40.962785139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:40.964121 containerd[1996]: time="2026-04-13T20:17:40.964079751Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 2.372233237s"
Apr 13 20:17:40.964275 containerd[1996]: time="2026-04-13T20:17:40.964253264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\""
Apr 13 20:17:40.965313 containerd[1996]: time="2026-04-13T20:17:40.965282108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 13 20:17:42.778023 containerd[1996]: time="2026-04-13T20:17:42.777894560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:42.784114 containerd[1996]: time="2026-04-13T20:17:42.783883648Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165812"
Apr 13 20:17:42.792454 containerd[1996]: time="2026-04-13T20:17:42.791826822Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:42.796313 containerd[1996]: time="2026-04-13T20:17:42.795821980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:42.797137 containerd[1996]: time="2026-04-13T20:17:42.797091120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 1.831678686s"
Apr 13 20:17:42.797250 containerd[1996]: time="2026-04-13T20:17:42.797143807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\""
Apr 13 20:17:42.797860 containerd[1996]: time="2026-04-13T20:17:42.797702687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 13 20:17:43.754073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:17:43.762499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:17:44.008479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:17:44.011718 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:17:44.073062 kubelet[2548]: E0413 20:17:44.072662 2548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:17:44.075379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:17:44.075581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:17:44.412495 containerd[1996]: time="2026-04-13T20:17:44.411925249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:44.413745 containerd[1996]: time="2026-04-13T20:17:44.413671468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729847"
Apr 13 20:17:44.415271 containerd[1996]: time="2026-04-13T20:17:44.415197464Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:44.422078 containerd[1996]: time="2026-04-13T20:17:44.420803398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 1.623058818s"
Apr 13 20:17:44.422078 containerd[1996]: time="2026-04-13T20:17:44.420882982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\""
Apr 13 20:17:44.422078 containerd[1996]: time="2026-04-13T20:17:44.420918381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:44.422078 containerd[1996]: time="2026-04-13T20:17:44.421412679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 13 20:17:45.544542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260784198.mount: Deactivated successfully.
Apr 13 20:17:45.939530 containerd[1996]: time="2026-04-13T20:17:45.939397882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:45.940703 containerd[1996]: time="2026-04-13T20:17:45.940627071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861774"
Apr 13 20:17:45.942375 containerd[1996]: time="2026-04-13T20:17:45.942311008Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:45.945246 containerd[1996]: time="2026-04-13T20:17:45.944851965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:45.946240 containerd[1996]: time="2026-04-13T20:17:45.945586254Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.524141123s"
Apr 13 20:17:45.946240 containerd[1996]: time="2026-04-13T20:17:45.945628253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\""
Apr 13 20:17:45.946704 containerd[1996]: time="2026-04-13T20:17:45.946675190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 13 20:17:46.446561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200768482.mount: Deactivated successfully.
Apr 13 20:17:47.734283 containerd[1996]: time="2026-04-13T20:17:47.734226309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:47.735751 containerd[1996]: time="2026-04-13T20:17:47.735699326Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Apr 13 20:17:47.737244 containerd[1996]: time="2026-04-13T20:17:47.737016130Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:47.742062 containerd[1996]: time="2026-04-13T20:17:47.741833396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:47.743371 containerd[1996]: time="2026-04-13T20:17:47.743317973Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.796607658s"
Apr 13 20:17:47.743371 containerd[1996]: time="2026-04-13T20:17:47.743371394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 13 20:17:47.744525 containerd[1996]: time="2026-04-13T20:17:47.743914545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 20:17:48.339610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584870925.mount: Deactivated successfully.
Apr 13 20:17:48.347559 containerd[1996]: time="2026-04-13T20:17:48.347514018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:48.348578 containerd[1996]: time="2026-04-13T20:17:48.348504926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 13 20:17:48.351361 containerd[1996]: time="2026-04-13T20:17:48.350822455Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:48.354393 containerd[1996]: time="2026-04-13T20:17:48.354347720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:48.355426 containerd[1996]: time="2026-04-13T20:17:48.355383964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 611.432822ms"
Apr 13 20:17:48.355546 containerd[1996]: time="2026-04-13T20:17:48.355430193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 20:17:48.356061 containerd[1996]: time="2026-04-13T20:17:48.356033402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 13 20:17:48.878827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62688771.mount: Deactivated successfully.
Apr 13 20:17:49.489198 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:17:50.019556 containerd[1996]: time="2026-04-13T20:17:50.019499879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:50.021101 containerd[1996]: time="2026-04-13T20:17:50.020886931Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874231"
Apr 13 20:17:50.022956 containerd[1996]: time="2026-04-13T20:17:50.022525526Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:50.025908 containerd[1996]: time="2026-04-13T20:17:50.025859992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:50.027521 containerd[1996]: time="2026-04-13T20:17:50.027480037Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.671417634s"
Apr 13 20:17:50.027521 containerd[1996]: time="2026-04-13T20:17:50.027518651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 13 20:17:54.125079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 20:17:54.130906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:17:54.148621 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:17:54.148748 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:17:54.149426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:17:54.156745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:17:54.191503 systemd[1]: Reloading requested from client PID 2715 ('systemctl') (unit session-7.scope)...
Apr 13 20:17:54.191667 systemd[1]: Reloading...
Apr 13 20:17:54.317240 zram_generator::config[2758]: No configuration found.
Apr 13 20:17:54.460293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:17:54.554179 systemd[1]: Reloading finished in 361 ms.
Apr 13 20:17:54.608768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:17:54.608926 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:17:54.609296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:17:54.615574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:17:55.050661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:17:55.057508 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:17:55.132744 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:17:55.132744 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:17:55.135159 kubelet[2815]: I0413 20:17:55.134765 2815 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:17:55.410989 kubelet[2815]: I0413 20:17:55.410579 2815 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 20:17:55.410989 kubelet[2815]: I0413 20:17:55.410606 2815 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:17:55.410989 kubelet[2815]: I0413 20:17:55.410635 2815 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:17:55.410989 kubelet[2815]: I0413 20:17:55.410643 2815 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:17:55.411326 kubelet[2815]: I0413 20:17:55.411142 2815 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:17:55.422241 kubelet[2815]: I0413 20:17:55.421309 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:17:55.424489 kubelet[2815]: E0413 20:17:55.424448 2815 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:17:55.431841 kubelet[2815]: E0413 20:17:55.431791 2815 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:17:55.432009 kubelet[2815]: I0413 20:17:55.431868 2815 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:17:55.434858 kubelet[2815]: I0413 20:17:55.434818 2815 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 20:17:55.435983 kubelet[2815]: I0413 20:17:55.435940 2815 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:17:55.436190 kubelet[2815]: I0413 20:17:55.435982 2815 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-102","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:17:55.436332 kubelet[2815]: I0413 20:17:55.436194 2815 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:17:55.436332 kubelet[2815]: I0413 20:17:55.436223 2815 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 20:17:55.436484 kubelet[2815]: I0413 20:17:55.436346 2815 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:17:55.438760 kubelet[2815]: I0413 20:17:55.438739 2815 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:17:55.438978 kubelet[2815]: I0413 20:17:55.438960 2815 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 20:17:55.439047 kubelet[2815]: I0413 20:17:55.438982 2815 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:17:55.439047 kubelet[2815]: I0413 20:17:55.439011 2815 kubelet.go:387] "Adding apiserver pod source"
Apr 13 20:17:55.439047 kubelet[2815]: I0413 20:17:55.439027 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:17:55.444234 kubelet[2815]: E0413 20:17:55.442415 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 20:17:55.444234 kubelet[2815]: E0413 20:17:55.442542 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-102&limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:17:55.444234 kubelet[2815]: I0413 20:17:55.443092 2815 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:17:55.444234 kubelet[2815]: I0413 20:17:55.444017 2815 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:17:55.444234 kubelet[2815]: I0413 20:17:55.444062 2815 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:17:55.444234 kubelet[2815]: W0413 20:17:55.444144 2815 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:17:55.448177 kubelet[2815]: I0413 20:17:55.448138 2815 server.go:1262] "Started kubelet"
Apr 13 20:17:55.451173 kubelet[2815]: I0413 20:17:55.451035 2815 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:17:55.453231 kubelet[2815]: I0413 20:17:55.452950 2815 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 20:17:55.453655 kubelet[2815]: I0413 20:17:55.453619 2815 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:17:55.453739 kubelet[2815]: I0413 20:17:55.453687 2815 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:17:55.454303 kubelet[2815]: I0413 20:17:55.454259 2815 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:17:55.457489 kubelet[2815]: E0413 20:17:55.454435 2815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.102:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-102.18a603fa0794502e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-102,UID:ip-172-31-17-102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-102,},FirstTimestamp:2026-04-13 20:17:55.448107054 +0000 UTC m=+0.361688031,LastTimestamp:2026-04-13 20:17:55.448107054 +0000 UTC m=+0.361688031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-102,}"
Apr 13 20:17:55.461937 kubelet[2815]: I0413 20:17:55.461659 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:17:55.465541 kubelet[2815]: I0413 20:17:55.462693 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:17:55.469411 kubelet[2815]: E0413 20:17:55.469386 2815 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:17:55.469612 kubelet[2815]: E0413 20:17:55.469597 2815 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-102\" not found"
Apr 13 20:17:55.469727 kubelet[2815]: I0413 20:17:55.469715 2815 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 20:17:55.470074 kubelet[2815]: I0413 20:17:55.470055 2815 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:17:55.470220 kubelet[2815]: I0413 20:17:55.470197 2815 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:17:55.470788 kubelet[2815]: E0413 20:17:55.470763 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 20:17:55.471104 kubelet[2815]: I0413 20:17:55.471088 2815 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:17:55.471295 kubelet[2815]: I0413 20:17:55.471275 2815 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:17:55.472267 kubelet[2815]: E0413 20:17:55.472225 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": dial tcp 172.31.17.102:6443: connect: connection refused" interval="200ms"
Apr 13 20:17:55.473752 kubelet[2815]: I0413 20:17:55.473734 2815 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:17:55.502568 kubelet[2815]: I0413 20:17:55.502543 2815 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:17:55.502744 kubelet[2815]: I0413 20:17:55.502729 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:17:55.502850 kubelet[2815]: I0413 20:17:55.502840 2815 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:17:55.506128 kubelet[2815]: I0413 20:17:55.506098 2815 policy_none.go:49] "None policy: Start"
Apr 13 20:17:55.506128 kubelet[2815]: I0413 20:17:55.506123 2815 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:17:55.506322 kubelet[2815]: I0413 20:17:55.506136 2815 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:17:55.508995 kubelet[2815]: I0413 20:17:55.508618 2815 policy_none.go:47] "Start"
Apr 13 20:17:55.515376 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 20:17:55.518885 kubelet[2815]: I0413 20:17:55.518722 2815 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:17:55.520818 kubelet[2815]: I0413 20:17:55.520380 2815 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:17:55.520818 kubelet[2815]: I0413 20:17:55.520404 2815 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 20:17:55.520818 kubelet[2815]: I0413 20:17:55.520435 2815 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 20:17:55.520818 kubelet[2815]: E0413 20:17:55.520503 2815 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:17:55.526521 kubelet[2815]: E0413 20:17:55.526443 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 20:17:55.535117 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 13 20:17:55.545428 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 20:17:55.547971 kubelet[2815]: E0413 20:17:55.547297 2815 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:17:55.547971 kubelet[2815]: I0413 20:17:55.547536 2815 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:17:55.547971 kubelet[2815]: I0413 20:17:55.547550 2815 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:17:55.547971 kubelet[2815]: I0413 20:17:55.547837 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:17:55.550253 kubelet[2815]: E0413 20:17:55.550201 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:17:55.550342 kubelet[2815]: E0413 20:17:55.550277 2815 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-102\" not found"
Apr 13 20:17:55.639974 systemd[1]: Created slice kubepods-burstable-pod0d6f9cad7a5eedba2b4e0319c22d43a5.slice - libcontainer container kubepods-burstable-pod0d6f9cad7a5eedba2b4e0319c22d43a5.slice.
Apr 13 20:17:55.650686 kubelet[2815]: E0413 20:17:55.650245 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102"
Apr 13 20:17:55.650686 kubelet[2815]: I0413 20:17:55.650293 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102"
Apr 13 20:17:55.651288 kubelet[2815]: E0413 20:17:55.651258 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.102:6443/api/v1/nodes\": dial tcp 172.31.17.102:6443: connect: connection refused" node="ip-172-31-17-102"
Apr 13 20:17:55.653699 systemd[1]: Created slice kubepods-burstable-podfa1e3fde6e76199620f48bd40ddf59ba.slice - libcontainer container kubepods-burstable-podfa1e3fde6e76199620f48bd40ddf59ba.slice.
Apr 13 20:17:55.656605 kubelet[2815]: E0413 20:17:55.656577 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102"
Apr 13 20:17:55.659801 systemd[1]: Created slice kubepods-burstable-pod63451c1e3686d6fd7333bfeaf1505c49.slice - libcontainer container kubepods-burstable-pod63451c1e3686d6fd7333bfeaf1505c49.slice.
Apr 13 20:17:55.662884 kubelet[2815]: E0413 20:17:55.662135 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:55.674004 kubelet[2815]: E0413 20:17:55.673819 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": dial tcp 172.31.17.102:6443: connect: connection refused" interval="400ms" Apr 13 20:17:55.772485 kubelet[2815]: I0413 20:17:55.772373 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:17:55.772485 kubelet[2815]: I0413 20:17:55.772424 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:17:55.772485 kubelet[2815]: I0413 20:17:55.772472 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:17:55.772485 kubelet[2815]: I0413 20:17:55.772497 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:17:55.772909 kubelet[2815]: I0413 20:17:55.772522 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:17:55.772909 kubelet[2815]: I0413 20:17:55.772548 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa1e3fde6e76199620f48bd40ddf59ba-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-102\" (UID: \"fa1e3fde6e76199620f48bd40ddf59ba\") " pod="kube-system/kube-scheduler-ip-172-31-17-102" Apr 13 20:17:55.772909 kubelet[2815]: I0413 20:17:55.772582 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-ca-certs\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102" Apr 13 20:17:55.772909 kubelet[2815]: I0413 20:17:55.772620 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102" Apr 13 20:17:55.772909 kubelet[2815]: I0413 20:17:55.772649 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102" Apr 13 20:17:55.852902 kubelet[2815]: I0413 20:17:55.852871 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102" Apr 13 20:17:55.853256 kubelet[2815]: E0413 20:17:55.853201 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.102:6443/api/v1/nodes\": dial tcp 172.31.17.102:6443: connect: connection refused" node="ip-172-31-17-102" Apr 13 20:17:55.954167 containerd[1996]: time="2026-04-13T20:17:55.954047226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-102,Uid:0d6f9cad7a5eedba2b4e0319c22d43a5,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:55.960517 containerd[1996]: time="2026-04-13T20:17:55.960231434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-102,Uid:fa1e3fde6e76199620f48bd40ddf59ba,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:55.964917 containerd[1996]: time="2026-04-13T20:17:55.964881434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-102,Uid:63451c1e3686d6fd7333bfeaf1505c49,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:56.074641 kubelet[2815]: E0413 20:17:56.074597 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": dial tcp 172.31.17.102:6443: connect: connection refused" interval="800ms" Apr 13 20:17:56.255387 kubelet[2815]: I0413 20:17:56.255327 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102" Apr 13 20:17:56.255823 kubelet[2815]: E0413 20:17:56.255710 2815 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.102:6443/api/v1/nodes\": dial tcp 172.31.17.102:6443: connect: connection refused" node="ip-172-31-17-102" Apr 13 20:17:56.339811 kubelet[2815]: E0413 20:17:56.339763 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:17:56.504178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319819307.mount: Deactivated successfully. Apr 13 20:17:56.515790 containerd[1996]: time="2026-04-13T20:17:56.515653221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:17:56.517094 containerd[1996]: time="2026-04-13T20:17:56.517028846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:17:56.518245 containerd[1996]: time="2026-04-13T20:17:56.517986703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 13 20:17:56.519531 containerd[1996]: time="2026-04-13T20:17:56.519484031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:17:56.520771 containerd[1996]: time="2026-04-13T20:17:56.520685059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:17:56.522418 containerd[1996]: time="2026-04-13T20:17:56.522381523Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:17:56.523346 containerd[1996]: time="2026-04-13T20:17:56.523266984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:17:56.525616 containerd[1996]: time="2026-04-13T20:17:56.525522453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:17:56.527183 containerd[1996]: time="2026-04-13T20:17:56.527144337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.011211ms" Apr 13 20:17:56.528553 containerd[1996]: time="2026-04-13T20:17:56.528405401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.072022ms" Apr 13 20:17:56.532053 containerd[1996]: time="2026-04-13T20:17:56.532018332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.064872ms" Apr 13 20:17:56.764275 kubelet[2815]: E0413 
20:17:56.763473 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:17:56.765782 containerd[1996]: time="2026-04-13T20:17:56.765399379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:56.765782 containerd[1996]: time="2026-04-13T20:17:56.765484136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:56.765782 containerd[1996]: time="2026-04-13T20:17:56.765520464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.765782 containerd[1996]: time="2026-04-13T20:17:56.765628561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.774550 containerd[1996]: time="2026-04-13T20:17:56.774384466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:56.774833 containerd[1996]: time="2026-04-13T20:17:56.774790279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:56.775362 containerd[1996]: time="2026-04-13T20:17:56.775020156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.775362 containerd[1996]: time="2026-04-13T20:17:56.775156041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.777712 containerd[1996]: time="2026-04-13T20:17:56.777406504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:56.777712 containerd[1996]: time="2026-04-13T20:17:56.777489180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:56.777712 containerd[1996]: time="2026-04-13T20:17:56.777516406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.777712 containerd[1996]: time="2026-04-13T20:17:56.777627867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:56.806440 systemd[1]: Started cri-containerd-51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52.scope - libcontainer container 51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52. Apr 13 20:17:56.822434 systemd[1]: Started cri-containerd-efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18.scope - libcontainer container efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18. Apr 13 20:17:56.834544 systemd[1]: Started cri-containerd-de846af62a4d51d344eec17c7caf5b03b6ea865590f29b2f297e707b03f9b864.scope - libcontainer container de846af62a4d51d344eec17c7caf5b03b6ea865590f29b2f297e707b03f9b864. 
Apr 13 20:17:56.876254 kubelet[2815]: E0413 20:17:56.875643 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": dial tcp 172.31.17.102:6443: connect: connection refused" interval="1.6s" Apr 13 20:17:56.917348 kubelet[2815]: E0413 20:17:56.917265 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:17:56.928432 kubelet[2815]: E0413 20:17:56.928382 2815 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-102&limit=500&resourceVersion=0\": dial tcp 172.31.17.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:17:56.931384 containerd[1996]: time="2026-04-13T20:17:56.930966756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-102,Uid:63451c1e3686d6fd7333bfeaf1505c49,Namespace:kube-system,Attempt:0,} returns sandbox id \"de846af62a4d51d344eec17c7caf5b03b6ea865590f29b2f297e707b03f9b864\"" Apr 13 20:17:56.942992 containerd[1996]: time="2026-04-13T20:17:56.942368414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-102,Uid:0d6f9cad7a5eedba2b4e0319c22d43a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52\"" Apr 13 20:17:56.942992 containerd[1996]: time="2026-04-13T20:17:56.942735989Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-102,Uid:fa1e3fde6e76199620f48bd40ddf59ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18\"" Apr 13 20:17:56.952692 containerd[1996]: time="2026-04-13T20:17:56.952536504Z" level=info msg="CreateContainer within sandbox \"efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:17:56.952692 containerd[1996]: time="2026-04-13T20:17:56.952536638Z" level=info msg="CreateContainer within sandbox \"de846af62a4d51d344eec17c7caf5b03b6ea865590f29b2f297e707b03f9b864\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:17:56.953233 containerd[1996]: time="2026-04-13T20:17:56.953054604Z" level=info msg="CreateContainer within sandbox \"51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:17:57.022045 containerd[1996]: time="2026-04-13T20:17:57.021780204Z" level=info msg="CreateContainer within sandbox \"efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6\"" Apr 13 20:17:57.023639 containerd[1996]: time="2026-04-13T20:17:57.023595726Z" level=info msg="CreateContainer within sandbox \"de846af62a4d51d344eec17c7caf5b03b6ea865590f29b2f297e707b03f9b864\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ac11bb9d0ec8a765e495a6416350efa261a01351521e125942ecb84c30b7ad31\"" Apr 13 20:17:57.023898 containerd[1996]: time="2026-04-13T20:17:57.023865575Z" level=info msg="StartContainer for \"d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6\"" Apr 13 20:17:57.027057 containerd[1996]: time="2026-04-13T20:17:57.026764142Z" level=info msg="CreateContainer within 
sandbox \"51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299\"" Apr 13 20:17:57.029788 containerd[1996]: time="2026-04-13T20:17:57.027786858Z" level=info msg="StartContainer for \"71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299\"" Apr 13 20:17:57.039490 containerd[1996]: time="2026-04-13T20:17:57.039438952Z" level=info msg="StartContainer for \"ac11bb9d0ec8a765e495a6416350efa261a01351521e125942ecb84c30b7ad31\"" Apr 13 20:17:57.058202 kubelet[2815]: I0413 20:17:57.058161 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102" Apr 13 20:17:57.058762 kubelet[2815]: E0413 20:17:57.058717 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.102:6443/api/v1/nodes\": dial tcp 172.31.17.102:6443: connect: connection refused" node="ip-172-31-17-102" Apr 13 20:17:57.084467 systemd[1]: Started cri-containerd-d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6.scope - libcontainer container d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6. Apr 13 20:17:57.090299 systemd[1]: Started cri-containerd-71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299.scope - libcontainer container 71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299. Apr 13 20:17:57.104487 systemd[1]: Started cri-containerd-ac11bb9d0ec8a765e495a6416350efa261a01351521e125942ecb84c30b7ad31.scope - libcontainer container ac11bb9d0ec8a765e495a6416350efa261a01351521e125942ecb84c30b7ad31. 
Apr 13 20:17:57.186091 containerd[1996]: time="2026-04-13T20:17:57.186037591Z" level=info msg="StartContainer for \"d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6\" returns successfully" Apr 13 20:17:57.202240 containerd[1996]: time="2026-04-13T20:17:57.201087587Z" level=info msg="StartContainer for \"71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299\" returns successfully" Apr 13 20:17:57.218330 containerd[1996]: time="2026-04-13T20:17:57.218102275Z" level=info msg="StartContainer for \"ac11bb9d0ec8a765e495a6416350efa261a01351521e125942ecb84c30b7ad31\" returns successfully" Apr 13 20:17:57.553240 kubelet[2815]: E0413 20:17:57.552627 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:57.564317 kubelet[2815]: E0413 20:17:57.564069 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:57.564944 kubelet[2815]: E0413 20:17:57.564927 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:58.565763 kubelet[2815]: E0413 20:17:58.565563 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:58.567279 kubelet[2815]: E0413 20:17:58.567113 2815 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:17:58.661906 kubelet[2815]: I0413 20:17:58.661737 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102" Apr 13 20:17:59.569240 kubelet[2815]: E0413 20:17:59.567986 2815 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:18:00.112451 kubelet[2815]: E0413 20:18:00.112412 2815 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-102\" not found" node="ip-172-31-17-102" Apr 13 20:18:00.344378 kubelet[2815]: I0413 20:18:00.344051 2815 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-102" Apr 13 20:18:00.372054 kubelet[2815]: I0413 20:18:00.371945 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:18:00.382908 kubelet[2815]: E0413 20:18:00.382861 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-102\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:18:00.382908 kubelet[2815]: I0413 20:18:00.382898 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-102" Apr 13 20:18:00.385345 kubelet[2815]: E0413 20:18:00.385300 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-102\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-102" Apr 13 20:18:00.385345 kubelet[2815]: I0413 20:18:00.385331 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-102" Apr 13 20:18:00.387438 kubelet[2815]: E0413 20:18:00.387405 2815 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-102\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-102" Apr 13 20:18:00.445527 kubelet[2815]: I0413 20:18:00.445479 2815 apiserver.go:52] "Watching apiserver" Apr 13 
20:18:00.470555 kubelet[2815]: I0413 20:18:00.470520 2815 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:18:01.069416 kubelet[2815]: I0413 20:18:01.069382 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-102" Apr 13 20:18:02.147877 kubelet[2815]: I0413 20:18:02.143713 2815 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-102" Apr 13 20:18:03.610430 update_engine[1968]: I20260413 20:18:03.610347 1968 update_attempter.cc:509] Updating boot flags... Apr 13 20:18:03.744535 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3117) Apr 13 20:18:03.965920 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3123) Apr 13 20:18:04.090975 systemd[1]: Reloading requested from client PID 3286 ('systemctl') (unit session-7.scope)... Apr 13 20:18:04.090994 systemd[1]: Reloading... Apr 13 20:18:04.258233 zram_generator::config[3330]: No configuration found. Apr 13 20:18:04.455021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:18:04.580415 systemd[1]: Reloading finished in 488 ms. Apr 13 20:18:04.654873 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:18:04.678926 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:18:04.679272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:18:04.685598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:18:05.400351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:18:05.418791 (kubelet)[3387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:18:05.497952 kubelet[3387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:18:05.497952 kubelet[3387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:18:05.497952 kubelet[3387]: I0413 20:18:05.494703 3387 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:18:05.510255 kubelet[3387]: I0413 20:18:05.509665 3387 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:18:05.510255 kubelet[3387]: I0413 20:18:05.509693 3387 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:18:05.510255 kubelet[3387]: I0413 20:18:05.509726 3387 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:18:05.510255 kubelet[3387]: I0413 20:18:05.509738 3387 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:18:05.510255 kubelet[3387]: I0413 20:18:05.510069 3387 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:18:05.516260 kubelet[3387]: I0413 20:18:05.516061 3387 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:18:05.525696 kubelet[3387]: I0413 20:18:05.525583 3387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:18:05.534735 kubelet[3387]: E0413 20:18:05.534645 3387 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:18:05.534735 kubelet[3387]: I0413 20:18:05.534735 3387 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:18:05.539454 kubelet[3387]: I0413 20:18:05.539424 3387 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:18:05.540617 kubelet[3387]: I0413 20:18:05.540499 3387 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:18:05.540799 kubelet[3387]: I0413 20:18:05.540565 3387 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-102","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:18:05.540941 kubelet[3387]: I0413 20:18:05.540800 3387 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:18:05.540941 kubelet[3387]: I0413 20:18:05.540815 3387 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 20:18:05.540941 kubelet[3387]: I0413 20:18:05.540855 3387 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:18:05.541121 kubelet[3387]: I0413 20:18:05.541099 3387 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:18:05.542737 kubelet[3387]: I0413 20:18:05.542707 3387 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 20:18:05.542823 kubelet[3387]: I0413 20:18:05.542745 3387 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:18:05.542823 kubelet[3387]: I0413 20:18:05.542776 3387 kubelet.go:387] "Adding apiserver pod source"
Apr 13 20:18:05.542823 kubelet[3387]: I0413 20:18:05.542799 3387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:18:05.550052 kubelet[3387]: I0413 20:18:05.548874 3387 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:18:05.550052 kubelet[3387]: I0413 20:18:05.549697 3387 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:18:05.550052 kubelet[3387]: I0413 20:18:05.549745 3387 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:18:05.559822 kubelet[3387]: I0413 20:18:05.559797 3387 server.go:1262] "Started kubelet"
Apr 13 20:18:05.563239 kubelet[3387]: I0413 20:18:05.562675 3387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:18:05.569957 kubelet[3387]: I0413 20:18:05.569906 3387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:18:05.575973 kubelet[3387]: I0413 20:18:05.574595 3387 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 20:18:05.592463 kubelet[3387]: I0413 20:18:05.578632 3387 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 20:18:05.592463 kubelet[3387]: I0413 20:18:05.591988 3387 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:18:05.592463 kubelet[3387]: I0413 20:18:05.592059 3387 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:18:05.592463 kubelet[3387]: I0413 20:18:05.592278 3387 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:18:05.602811 kubelet[3387]: I0413 20:18:05.602775 3387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:18:05.608585 kubelet[3387]: I0413 20:18:05.578647 3387 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:18:05.608585 kubelet[3387]: I0413 20:18:05.608241 3387 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:18:05.608585 kubelet[3387]: E0413 20:18:05.578883 3387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-102\" not found"
Apr 13 20:18:05.614468 kubelet[3387]: E0413 20:18:05.614424 3387 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:18:05.615101 kubelet[3387]: I0413 20:18:05.615053 3387 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:18:05.616985 kubelet[3387]: I0413 20:18:05.615199 3387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:18:05.624513 kubelet[3387]: I0413 20:18:05.620024 3387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:18:05.624513 kubelet[3387]: I0413 20:18:05.621479 3387 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:18:05.624513 kubelet[3387]: I0413 20:18:05.622662 3387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:18:05.624513 kubelet[3387]: I0413 20:18:05.622681 3387 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 20:18:05.624513 kubelet[3387]: I0413 20:18:05.622711 3387 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 20:18:05.624513 kubelet[3387]: E0413 20:18:05.622759 3387 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:18:05.698006 kubelet[3387]: I0413 20:18:05.695917 3387 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:18:05.698180 kubelet[3387]: I0413 20:18:05.698159 3387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:18:05.698297 kubelet[3387]: I0413 20:18:05.698286 3387 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:18:05.698579 kubelet[3387]: I0413 20:18:05.698562 3387 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 20:18:05.698707 kubelet[3387]: I0413 20:18:05.698680 3387 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 20:18:05.698774 kubelet[3387]: I0413 20:18:05.698766 3387 policy_none.go:49] "None policy: Start"
Apr 13 20:18:05.698838 kubelet[3387]: I0413 20:18:05.698830 3387 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:18:05.698901 kubelet[3387]: I0413 20:18:05.698892 3387 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:18:05.699160 kubelet[3387]: I0413 20:18:05.699147 3387 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 13 20:18:05.699268 kubelet[3387]: I0413 20:18:05.699258 3387 policy_none.go:47] "Start"
Apr 13 20:18:05.710644 kubelet[3387]: E0413 20:18:05.708848 3387 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:18:05.710644 kubelet[3387]: I0413 20:18:05.709032 3387 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:18:05.710644 kubelet[3387]: I0413 20:18:05.709043 3387 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:18:05.710644 kubelet[3387]: I0413 20:18:05.709630 3387 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:18:05.714620 kubelet[3387]: E0413 20:18:05.714594 3387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:18:05.726295 kubelet[3387]: I0413 20:18:05.726266 3387 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:05.728934 kubelet[3387]: I0413 20:18:05.728910 3387 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.731158 kubelet[3387]: I0413 20:18:05.729646 3387 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-102"
Apr 13 20:18:05.747853 kubelet[3387]: E0413 20:18:05.747724 3387 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-102\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.750286 kubelet[3387]: E0413 20:18:05.750263 3387 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-102\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-102"
Apr 13 20:18:05.836961 kubelet[3387]: I0413 20:18:05.835932 3387 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-102"
Apr 13 20:18:05.845694 kubelet[3387]: I0413 20:18:05.845666 3387 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-102"
Apr 13 20:18:05.846266 kubelet[3387]: I0413 20:18:05.846068 3387 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-102"
Apr 13 20:18:05.908965 kubelet[3387]: I0413 20:18:05.908908 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa1e3fde6e76199620f48bd40ddf59ba-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-102\" (UID: \"fa1e3fde6e76199620f48bd40ddf59ba\") " pod="kube-system/kube-scheduler-ip-172-31-17-102"
Apr 13 20:18:05.908965 kubelet[3387]: I0413 20:18:05.908960 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-ca-certs\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:05.909184 kubelet[3387]: I0413 20:18:05.908985 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:05.909184 kubelet[3387]: I0413 20:18:05.909008 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.909184 kubelet[3387]: I0413 20:18:05.909049 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.909184 kubelet[3387]: I0413 20:18:05.909072 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.909184 kubelet[3387]: I0413 20:18:05.909093 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63451c1e3686d6fd7333bfeaf1505c49-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-102\" (UID: \"63451c1e3686d6fd7333bfeaf1505c49\") " pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:05.909420 kubelet[3387]: I0413 20:18:05.909130 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:05.909420 kubelet[3387]: I0413 20:18:05.909156 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d6f9cad7a5eedba2b4e0319c22d43a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-102\" (UID: \"0d6f9cad7a5eedba2b4e0319c22d43a5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-102"
Apr 13 20:18:06.546941 kubelet[3387]: I0413 20:18:06.546887 3387 apiserver.go:52] "Watching apiserver"
Apr 13 20:18:06.608634 kubelet[3387]: I0413 20:18:06.608582 3387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 13 20:18:06.667886 kubelet[3387]: I0413 20:18:06.667855 3387 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:06.677758 kubelet[3387]: E0413 20:18:06.677704 3387 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-102\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-102"
Apr 13 20:18:06.705591 kubelet[3387]: I0413 20:18:06.704620 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-102" podStartSLOduration=5.704601587 podStartE2EDuration="5.704601587s" podCreationTimestamp="2026-04-13 20:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:18:06.703863582 +0000 UTC m=+1.273271301" watchObservedRunningTime="2026-04-13 20:18:06.704601587 +0000 UTC m=+1.274009304"
Apr 13 20:18:06.740752 kubelet[3387]: I0413 20:18:06.740680 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-102" podStartSLOduration=4.740662474 podStartE2EDuration="4.740662474s" podCreationTimestamp="2026-04-13 20:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:18:06.718437365 +0000 UTC m=+1.287845085" watchObservedRunningTime="2026-04-13 20:18:06.740662474 +0000 UTC m=+1.310070192"
Apr 13 20:18:06.754234 kubelet[3387]: I0413 20:18:06.754008 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-102" podStartSLOduration=1.7539877910000001 podStartE2EDuration="1.753987791s" podCreationTimestamp="2026-04-13 20:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:18:06.741782415 +0000 UTC m=+1.311190136" watchObservedRunningTime="2026-04-13 20:18:06.753987791 +0000 UTC m=+1.323395504"
Apr 13 20:18:09.355988 kubelet[3387]: I0413 20:18:09.355950 3387 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 20:18:09.356473 containerd[1996]: time="2026-04-13T20:18:09.356421245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 20:18:09.357067 kubelet[3387]: I0413 20:18:09.356688 3387 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 20:18:10.367340 systemd[1]: Created slice kubepods-besteffort-podcae62251_13d0_49df_89f6_397c83f8d7de.slice - libcontainer container kubepods-besteffort-podcae62251_13d0_49df_89f6_397c83f8d7de.slice.
Apr 13 20:18:10.440801 kubelet[3387]: I0413 20:18:10.440753 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cae62251-13d0-49df-89f6-397c83f8d7de-xtables-lock\") pod \"kube-proxy-gmgrk\" (UID: \"cae62251-13d0-49df-89f6-397c83f8d7de\") " pod="kube-system/kube-proxy-gmgrk"
Apr 13 20:18:10.440801 kubelet[3387]: I0413 20:18:10.440796 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5m8\" (UniqueName: \"kubernetes.io/projected/cae62251-13d0-49df-89f6-397c83f8d7de-kube-api-access-8b5m8\") pod \"kube-proxy-gmgrk\" (UID: \"cae62251-13d0-49df-89f6-397c83f8d7de\") " pod="kube-system/kube-proxy-gmgrk"
Apr 13 20:18:10.441322 kubelet[3387]: I0413 20:18:10.440827 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cae62251-13d0-49df-89f6-397c83f8d7de-lib-modules\") pod \"kube-proxy-gmgrk\" (UID: \"cae62251-13d0-49df-89f6-397c83f8d7de\") " pod="kube-system/kube-proxy-gmgrk"
Apr 13 20:18:10.441322 kubelet[3387]: I0413 20:18:10.440850 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cae62251-13d0-49df-89f6-397c83f8d7de-kube-proxy\") pod \"kube-proxy-gmgrk\" (UID: \"cae62251-13d0-49df-89f6-397c83f8d7de\") " pod="kube-system/kube-proxy-gmgrk"
Apr 13 20:18:10.542728 kubelet[3387]: I0413 20:18:10.541632 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2cb2e1fa-a6a6-495a-a093-d82e618e945c-var-lib-calico\") pod \"tigera-operator-5588576f44-z6mc7\" (UID: \"2cb2e1fa-a6a6-495a-a093-d82e618e945c\") " pod="tigera-operator/tigera-operator-5588576f44-z6mc7"
Apr 13 20:18:10.542728 kubelet[3387]: I0413 20:18:10.541687 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bnf\" (UniqueName: \"kubernetes.io/projected/2cb2e1fa-a6a6-495a-a093-d82e618e945c-kube-api-access-89bnf\") pod \"tigera-operator-5588576f44-z6mc7\" (UID: \"2cb2e1fa-a6a6-495a-a093-d82e618e945c\") " pod="tigera-operator/tigera-operator-5588576f44-z6mc7"
Apr 13 20:18:10.543064 systemd[1]: Created slice kubepods-besteffort-pod2cb2e1fa_a6a6_495a_a093_d82e618e945c.slice - libcontainer container kubepods-besteffort-pod2cb2e1fa_a6a6_495a_a093_d82e618e945c.slice.
Apr 13 20:18:10.680908 containerd[1996]: time="2026-04-13T20:18:10.680346769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmgrk,Uid:cae62251-13d0-49df-89f6-397c83f8d7de,Namespace:kube-system,Attempt:0,}"
Apr 13 20:18:10.715479 containerd[1996]: time="2026-04-13T20:18:10.714772348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:18:10.715479 containerd[1996]: time="2026-04-13T20:18:10.714863784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:18:10.715479 containerd[1996]: time="2026-04-13T20:18:10.714892207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:18:10.715479 containerd[1996]: time="2026-04-13T20:18:10.715014042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:18:10.753459 systemd[1]: Started cri-containerd-4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb.scope - libcontainer container 4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb.
Apr 13 20:18:10.783159 containerd[1996]: time="2026-04-13T20:18:10.783105685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmgrk,Uid:cae62251-13d0-49df-89f6-397c83f8d7de,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb\""
Apr 13 20:18:10.792063 containerd[1996]: time="2026-04-13T20:18:10.791773296Z" level=info msg="CreateContainer within sandbox \"4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 20:18:10.817549 containerd[1996]: time="2026-04-13T20:18:10.817490133Z" level=info msg="CreateContainer within sandbox \"4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ee96aa1b4df8d2ead6c15a683aba9e275847791c9cc0df34b380c99a086b678\""
Apr 13 20:18:10.819511 containerd[1996]: time="2026-04-13T20:18:10.819439553Z" level=info msg="StartContainer for \"9ee96aa1b4df8d2ead6c15a683aba9e275847791c9cc0df34b380c99a086b678\""
Apr 13 20:18:10.851633 systemd[1]: Started cri-containerd-9ee96aa1b4df8d2ead6c15a683aba9e275847791c9cc0df34b380c99a086b678.scope - libcontainer container 9ee96aa1b4df8d2ead6c15a683aba9e275847791c9cc0df34b380c99a086b678.
Apr 13 20:18:10.854847 containerd[1996]: time="2026-04-13T20:18:10.854819902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-z6mc7,Uid:2cb2e1fa-a6a6-495a-a093-d82e618e945c,Namespace:tigera-operator,Attempt:0,}"
Apr 13 20:18:10.894242 containerd[1996]: time="2026-04-13T20:18:10.893960687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:18:10.896732 containerd[1996]: time="2026-04-13T20:18:10.896611097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:18:10.896732 containerd[1996]: time="2026-04-13T20:18:10.896642135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:18:10.897006 containerd[1996]: time="2026-04-13T20:18:10.896751484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:18:10.902025 containerd[1996]: time="2026-04-13T20:18:10.901795554Z" level=info msg="StartContainer for \"9ee96aa1b4df8d2ead6c15a683aba9e275847791c9cc0df34b380c99a086b678\" returns successfully"
Apr 13 20:18:10.925500 systemd[1]: Started cri-containerd-3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a.scope - libcontainer container 3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a.
Apr 13 20:18:11.000164 containerd[1996]: time="2026-04-13T20:18:10.999741179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-z6mc7,Uid:2cb2e1fa-a6a6-495a-a093-d82e618e945c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a\""
Apr 13 20:18:11.004077 containerd[1996]: time="2026-04-13T20:18:11.003804850Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 13 20:18:11.565939 systemd[1]: run-containerd-runc-k8s.io-4bdabedf46414940957c7afabd48261fbedf1e8ae1a84ae77f41eae940428bdb-runc.vK4b7X.mount: Deactivated successfully.
Apr 13 20:18:11.703311 kubelet[3387]: I0413 20:18:11.703239 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gmgrk" podStartSLOduration=1.703194067 podStartE2EDuration="1.703194067s" podCreationTimestamp="2026-04-13 20:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:18:11.692771173 +0000 UTC m=+6.262178894" watchObservedRunningTime="2026-04-13 20:18:11.703194067 +0000 UTC m=+6.272601844"
Apr 13 20:18:12.123996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300386652.mount: Deactivated successfully.
Apr 13 20:18:13.631098 containerd[1996]: time="2026-04-13T20:18:13.631041984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:13.632708 containerd[1996]: time="2026-04-13T20:18:13.632508242Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 13 20:18:13.635481 containerd[1996]: time="2026-04-13T20:18:13.634152838Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:13.637855 containerd[1996]: time="2026-04-13T20:18:13.636899930Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:13.637855 containerd[1996]: time="2026-04-13T20:18:13.637647218Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.633797597s"
Apr 13 20:18:13.637855 containerd[1996]: time="2026-04-13T20:18:13.637686444Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 13 20:18:13.643410 containerd[1996]: time="2026-04-13T20:18:13.643363663Z" level=info msg="CreateContainer within sandbox \"3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 13 20:18:13.667376 containerd[1996]: time="2026-04-13T20:18:13.667332097Z" level=info msg="CreateContainer within sandbox \"3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04\""
Apr 13 20:18:13.669163 containerd[1996]: time="2026-04-13T20:18:13.668081898Z" level=info msg="StartContainer for \"6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04\""
Apr 13 20:18:13.709424 systemd[1]: Started cri-containerd-6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04.scope - libcontainer container 6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04.
Apr 13 20:18:13.742047 containerd[1996]: time="2026-04-13T20:18:13.741995849Z" level=info msg="StartContainer for \"6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04\" returns successfully"
Apr 13 20:18:20.888112 sudo[2319]: pam_unix(sudo:session): session closed for user root
Apr 13 20:18:21.058088 sshd[2316]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:21.064411 systemd[1]: sshd@6-172.31.17.102:22-50.85.169.122:33742.service: Deactivated successfully.
Apr 13 20:18:21.068250 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:18:21.068470 systemd[1]: session-7.scope: Consumed 6.746s CPU time, 155.2M memory peak, 0B memory swap peak.
Apr 13 20:18:21.069938 systemd-logind[1966]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:18:21.072696 systemd-logind[1966]: Removed session 7.
Apr 13 20:18:25.099384 kubelet[3387]: I0413 20:18:25.099305 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-z6mc7" podStartSLOduration=12.463395116 podStartE2EDuration="15.099284421s" podCreationTimestamp="2026-04-13 20:18:10 +0000 UTC" firstStartedPulling="2026-04-13 20:18:11.00302836 +0000 UTC m=+5.572436062" lastFinishedPulling="2026-04-13 20:18:13.638917669 +0000 UTC m=+8.208325367" observedRunningTime="2026-04-13 20:18:14.720623392 +0000 UTC m=+9.290031112" watchObservedRunningTime="2026-04-13 20:18:25.099284421 +0000 UTC m=+19.668692141"
Apr 13 20:18:25.115269 systemd[1]: Created slice kubepods-besteffort-pod7d81188d_25b2_4071_a9b6_e5234a13250b.slice - libcontainer container kubepods-besteffort-pod7d81188d_25b2_4071_a9b6_e5234a13250b.slice.
Apr 13 20:18:25.143379 kubelet[3387]: I0413 20:18:25.143328 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcltw\" (UniqueName: \"kubernetes.io/projected/7d81188d-25b2-4071-a9b6-e5234a13250b-kube-api-access-tcltw\") pod \"calico-typha-8575cdb477-l59x7\" (UID: \"7d81188d-25b2-4071-a9b6-e5234a13250b\") " pod="calico-system/calico-typha-8575cdb477-l59x7"
Apr 13 20:18:25.143533 kubelet[3387]: I0413 20:18:25.143391 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d81188d-25b2-4071-a9b6-e5234a13250b-tigera-ca-bundle\") pod \"calico-typha-8575cdb477-l59x7\" (UID: \"7d81188d-25b2-4071-a9b6-e5234a13250b\") " pod="calico-system/calico-typha-8575cdb477-l59x7"
Apr 13 20:18:25.143533 kubelet[3387]: I0413 20:18:25.143412 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d81188d-25b2-4071-a9b6-e5234a13250b-typha-certs\") pod \"calico-typha-8575cdb477-l59x7\" (UID: \"7d81188d-25b2-4071-a9b6-e5234a13250b\") " pod="calico-system/calico-typha-8575cdb477-l59x7"
Apr 13 20:18:25.222117 systemd[1]: Created slice kubepods-besteffort-pod9c56abab_9748_4062_b1d7_3298c1497464.slice - libcontainer container kubepods-besteffort-pod9c56abab_9748_4062_b1d7_3298c1497464.slice.
Apr 13 20:18:25.244308 kubelet[3387]: I0413 20:18:25.244263 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-lib-modules\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.244470 kubelet[3387]: I0413 20:18:25.244370 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-var-run-calico\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.244812 kubelet[3387]: I0413 20:18:25.244560 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-cni-net-dir\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.244812 kubelet[3387]: I0413 20:18:25.244595 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-xtables-lock\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.244812 kubelet[3387]: I0413 20:18:25.244723 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c56abab-9748-4062-b1d7-3298c1497464-node-certs\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.244812 kubelet[3387]: I0413 20:18:25.244748 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-sys-fs\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.245046 kubelet[3387]: I0413 20:18:25.244772 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c56abab-9748-4062-b1d7-3298c1497464-tigera-ca-bundle\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.245046 kubelet[3387]: I0413 20:18:25.244909 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-cni-log-dir\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.246374 kubelet[3387]: I0413 20:18:25.245077 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-nodeproc\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.246374 kubelet[3387]: I0413 20:18:25.245115 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-policysync\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.246374 kubelet[3387]: I0413 20:18:25.245242 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gg7\" (UniqueName: \"kubernetes.io/projected/9c56abab-9748-4062-b1d7-3298c1497464-kube-api-access-g7gg7\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.246374 kubelet[3387]: I0413 20:18:25.245283 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-bpffs\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.250982 kubelet[3387]: I0413 20:18:25.250945 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-cni-bin-dir\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.251133 kubelet[3387]: I0413 20:18:25.251003 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-flexvol-driver-host\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.251133 kubelet[3387]: I0413 20:18:25.251029 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c56abab-9748-4062-b1d7-3298c1497464-var-lib-calico\") pod \"calico-node-8w2wd\" (UID: \"9c56abab-9748-4062-b1d7-3298c1497464\") " pod="calico-system/calico-node-8w2wd"
Apr 13 20:18:25.324049 kubelet[3387]: E0413 20:18:25.323603 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5"
Apr 13 20:18:25.354277 kubelet[3387]: I0413 20:18:25.351600 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c81d3c-90a0-440b-96eb-db49837fb4b5-kubelet-dir\") pod \"csi-node-driver-cdjzz\" (UID: \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\") " pod="calico-system/csi-node-driver-cdjzz"
Apr 13 20:18:25.354770 kubelet[3387]: I0413 20:18:25.354720 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f5c81d3c-90a0-440b-96eb-db49837fb4b5-socket-dir\") pod \"csi-node-driver-cdjzz\" (UID: \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\") " pod="calico-system/csi-node-driver-cdjzz"
Apr 13 20:18:25.354877 kubelet[3387]: I0413 20:18:25.354780 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2l6\" (UniqueName: \"kubernetes.io/projected/f5c81d3c-90a0-440b-96eb-db49837fb4b5-kube-api-access-zr2l6\") pod \"csi-node-driver-cdjzz\" (UID: \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\") " pod="calico-system/csi-node-driver-cdjzz"
Apr 13 20:18:25.354931 kubelet[3387]: I0413 20:18:25.354903 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f5c81d3c-90a0-440b-96eb-db49837fb4b5-registration-dir\") pod \"csi-node-driver-cdjzz\" (UID: \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\") " pod="calico-system/csi-node-driver-cdjzz"
Apr 13 20:18:25.356081 kubelet[3387]: I0413 20:18:25.355031 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f5c81d3c-90a0-440b-96eb-db49837fb4b5-varrun\") pod \"csi-node-driver-cdjzz\" (UID: \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\") " pod="calico-system/csi-node-driver-cdjzz"
Apr 13 20:18:25.367304 kubelet[3387]: E0413 20:18:25.367266 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:18:25.367441 kubelet[3387]: W0413 20:18:25.367300 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:18:25.367441 kubelet[3387]: E0413 20:18:25.367345 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:18:25.393356 kubelet[3387]: E0413 20:18:25.393309 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:18:25.394854 kubelet[3387]: W0413 20:18:25.394821 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:18:25.394976 kubelet[3387]: E0413 20:18:25.394863 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:18:25.425189 containerd[1996]: time="2026-04-13T20:18:25.425129261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8575cdb477-l59x7,Uid:7d81188d-25b2-4071-a9b6-e5234a13250b,Namespace:calico-system,Attempt:0,}"
Apr 13 20:18:25.457476 kubelet[3387]: E0413 20:18:25.456836 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:18:25.457476 kubelet[3387]: W0413 20:18:25.456863 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:18:25.457476 kubelet[3387]: E0413 20:18:25.456888 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:18:25.459228 kubelet[3387]: E0413 20:18:25.457887 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:18:25.459228 kubelet[3387]: W0413 20:18:25.457905 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:18:25.459228 kubelet[3387]: E0413 20:18:25.457927 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 13 20:18:25.475383 kubelet[3387]: E0413 20:18:25.475174 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:25.475383 kubelet[3387]: W0413 20:18:25.475188 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:25.475383 kubelet[3387]: E0413 20:18:25.475201 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:25.485957 kubelet[3387]: E0413 20:18:25.485863 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:25.485957 kubelet[3387]: W0413 20:18:25.485888 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:25.485957 kubelet[3387]: E0413 20:18:25.485911 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:25.518690 containerd[1996]: time="2026-04-13T20:18:25.518554033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:18:25.518690 containerd[1996]: time="2026-04-13T20:18:25.518631972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:18:25.519197 containerd[1996]: time="2026-04-13T20:18:25.518687308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:25.519197 containerd[1996]: time="2026-04-13T20:18:25.518825612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:25.530222 containerd[1996]: time="2026-04-13T20:18:25.530064839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8w2wd,Uid:9c56abab-9748-4062-b1d7-3298c1497464,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:25.547822 systemd[1]: Started cri-containerd-d3f32d28a209eb08e4c7adde35d14a7181a2e887c6717a9c08cc688fc963435d.scope - libcontainer container d3f32d28a209eb08e4c7adde35d14a7181a2e887c6717a9c08cc688fc963435d. Apr 13 20:18:25.578138 containerd[1996]: time="2026-04-13T20:18:25.577646226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:18:25.578138 containerd[1996]: time="2026-04-13T20:18:25.577820365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:18:25.578138 containerd[1996]: time="2026-04-13T20:18:25.577837458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:25.578138 containerd[1996]: time="2026-04-13T20:18:25.577972949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:25.606415 systemd[1]: Started cri-containerd-6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc.scope - libcontainer container 6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc. 
Apr 13 20:18:25.644190 containerd[1996]: time="2026-04-13T20:18:25.644146100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8575cdb477-l59x7,Uid:7d81188d-25b2-4071-a9b6-e5234a13250b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3f32d28a209eb08e4c7adde35d14a7181a2e887c6717a9c08cc688fc963435d\"" Apr 13 20:18:25.652601 containerd[1996]: time="2026-04-13T20:18:25.650901431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:18:25.666903 containerd[1996]: time="2026-04-13T20:18:25.666868501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8w2wd,Uid:9c56abab-9748-4062-b1d7-3298c1497464,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\"" Apr 13 20:18:26.624248 kubelet[3387]: E0413 20:18:26.623718 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:28.171336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723577359.mount: Deactivated successfully. 
Apr 13 20:18:28.624297 kubelet[3387]: E0413 20:18:28.624250 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:29.072016 containerd[1996]: time="2026-04-13T20:18:29.071946376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:29.073578 containerd[1996]: time="2026-04-13T20:18:29.073494823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:18:29.075092 containerd[1996]: time="2026-04-13T20:18:29.075047404Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:29.079124 containerd[1996]: time="2026-04-13T20:18:29.078119046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:29.079124 containerd[1996]: time="2026-04-13T20:18:29.078970950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.428017275s" Apr 13 20:18:29.079124 containerd[1996]: time="2026-04-13T20:18:29.079013524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:18:29.084123 containerd[1996]: time="2026-04-13T20:18:29.084067371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:18:29.106314 containerd[1996]: time="2026-04-13T20:18:29.106266440Z" level=info msg="CreateContainer within sandbox \"d3f32d28a209eb08e4c7adde35d14a7181a2e887c6717a9c08cc688fc963435d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:18:29.160600 containerd[1996]: time="2026-04-13T20:18:29.160346352Z" level=info msg="CreateContainer within sandbox \"d3f32d28a209eb08e4c7adde35d14a7181a2e887c6717a9c08cc688fc963435d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"01dee1c85afa4d120bc0b4692071f407df78a78c06ea5c9e07ec85cd19821d13\"" Apr 13 20:18:29.163581 containerd[1996]: time="2026-04-13T20:18:29.163540461Z" level=info msg="StartContainer for \"01dee1c85afa4d120bc0b4692071f407df78a78c06ea5c9e07ec85cd19821d13\"" Apr 13 20:18:29.254466 systemd[1]: Started cri-containerd-01dee1c85afa4d120bc0b4692071f407df78a78c06ea5c9e07ec85cd19821d13.scope - libcontainer container 01dee1c85afa4d120bc0b4692071f407df78a78c06ea5c9e07ec85cd19821d13. 
Apr 13 20:18:29.304644 containerd[1996]: time="2026-04-13T20:18:29.304588616Z" level=info msg="StartContainer for \"01dee1c85afa4d120bc0b4692071f407df78a78c06ea5c9e07ec85cd19821d13\" returns successfully" Apr 13 20:18:29.772962 kubelet[3387]: E0413 20:18:29.772779 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.772962 kubelet[3387]: W0413 20:18:29.772813 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.772962 kubelet[3387]: E0413 20:18:29.772839 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.773793 kubelet[3387]: E0413 20:18:29.773292 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.774055 kubelet[3387]: W0413 20:18:29.773874 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.774055 kubelet[3387]: E0413 20:18:29.773904 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.778998 kubelet[3387]: E0413 20:18:29.778859 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.778998 kubelet[3387]: W0413 20:18:29.778873 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.778998 kubelet[3387]: E0413 20:18:29.778886 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.779247 kubelet[3387]: E0413 20:18:29.779206 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.779439 kubelet[3387]: W0413 20:18:29.779314 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.779439 kubelet[3387]: E0413 20:18:29.779334 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.779631 kubelet[3387]: E0413 20:18:29.779620 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.779707 kubelet[3387]: W0413 20:18:29.779696 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.779774 kubelet[3387]: E0413 20:18:29.779764 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.811425 kubelet[3387]: E0413 20:18:29.811391 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.811721 kubelet[3387]: W0413 20:18:29.811420 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.811721 kubelet[3387]: E0413 20:18:29.811685 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.813467 kubelet[3387]: E0413 20:18:29.813441 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.813467 kubelet[3387]: W0413 20:18:29.813465 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.813615 kubelet[3387]: E0413 20:18:29.813487 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.813996 kubelet[3387]: E0413 20:18:29.813978 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.813996 kubelet[3387]: W0413 20:18:29.813995 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.814112 kubelet[3387]: E0413 20:18:29.814012 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.815648 kubelet[3387]: E0413 20:18:29.815628 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.815648 kubelet[3387]: W0413 20:18:29.815645 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.815774 kubelet[3387]: E0413 20:18:29.815669 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.816000 kubelet[3387]: E0413 20:18:29.815984 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.816061 kubelet[3387]: W0413 20:18:29.816000 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.816061 kubelet[3387]: E0413 20:18:29.816024 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.816351 kubelet[3387]: E0413 20:18:29.816335 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.816410 kubelet[3387]: W0413 20:18:29.816357 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.816410 kubelet[3387]: E0413 20:18:29.816371 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.816695 kubelet[3387]: E0413 20:18:29.816672 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.816756 kubelet[3387]: W0413 20:18:29.816701 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.816756 kubelet[3387]: E0413 20:18:29.816716 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.817233 kubelet[3387]: E0413 20:18:29.816989 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.817233 kubelet[3387]: W0413 20:18:29.817001 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.817233 kubelet[3387]: E0413 20:18:29.817013 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.819449 kubelet[3387]: E0413 20:18:29.819427 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.819518 kubelet[3387]: W0413 20:18:29.819456 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.819518 kubelet[3387]: E0413 20:18:29.819474 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.820245 kubelet[3387]: E0413 20:18:29.819742 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.820245 kubelet[3387]: W0413 20:18:29.819767 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.820245 kubelet[3387]: E0413 20:18:29.819779 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.820245 kubelet[3387]: E0413 20:18:29.820065 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.820245 kubelet[3387]: W0413 20:18:29.820075 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.820245 kubelet[3387]: E0413 20:18:29.820101 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.820900 kubelet[3387]: E0413 20:18:29.820880 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.820900 kubelet[3387]: W0413 20:18:29.820899 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.821008 kubelet[3387]: E0413 20:18:29.820913 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.821170 kubelet[3387]: E0413 20:18:29.821155 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.823078 kubelet[3387]: W0413 20:18:29.821172 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.823078 kubelet[3387]: E0413 20:18:29.821186 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.823078 kubelet[3387]: E0413 20:18:29.823048 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.823078 kubelet[3387]: W0413 20:18:29.823061 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.823078 kubelet[3387]: E0413 20:18:29.823074 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.823512 kubelet[3387]: E0413 20:18:29.823432 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.823512 kubelet[3387]: W0413 20:18:29.823444 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.823512 kubelet[3387]: E0413 20:18:29.823457 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.824310 kubelet[3387]: E0413 20:18:29.824292 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.824310 kubelet[3387]: W0413 20:18:29.824307 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.824429 kubelet[3387]: E0413 20:18:29.824325 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:29.828235 kubelet[3387]: E0413 20:18:29.826468 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.828235 kubelet[3387]: W0413 20:18:29.826484 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.828235 kubelet[3387]: E0413 20:18:29.826499 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:29.828637 kubelet[3387]: E0413 20:18:29.828620 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:29.828637 kubelet[3387]: W0413 20:18:29.828636 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:29.828749 kubelet[3387]: E0413 20:18:29.828651 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.624158 kubelet[3387]: E0413 20:18:30.624101 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:30.664120 containerd[1996]: time="2026-04-13T20:18:30.664068454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:30.667062 containerd[1996]: time="2026-04-13T20:18:30.666931203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:18:30.671779 containerd[1996]: time="2026-04-13T20:18:30.670162469Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:30.674245 containerd[1996]: time="2026-04-13T20:18:30.673188511Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:30.674245 containerd[1996]: time="2026-04-13T20:18:30.674047759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.589129958s" Apr 13 20:18:30.674245 containerd[1996]: time="2026-04-13T20:18:30.674092528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:18:30.681446 containerd[1996]: time="2026-04-13T20:18:30.681404224Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:18:30.705416 containerd[1996]: time="2026-04-13T20:18:30.705366185Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688\"" Apr 13 20:18:30.706477 containerd[1996]: time="2026-04-13T20:18:30.706343256Z" level=info msg="StartContainer for \"5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688\"" Apr 13 20:18:30.752448 systemd[1]: Started cri-containerd-5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688.scope - libcontainer container 5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688. 
Apr 13 20:18:30.768009 kubelet[3387]: I0413 20:18:30.767237 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8575cdb477-l59x7" podStartSLOduration=2.335224882 podStartE2EDuration="5.767190739s" podCreationTimestamp="2026-04-13 20:18:25 +0000 UTC" firstStartedPulling="2026-04-13 20:18:25.648627566 +0000 UTC m=+20.218035266" lastFinishedPulling="2026-04-13 20:18:29.080593425 +0000 UTC m=+23.650001123" observedRunningTime="2026-04-13 20:18:29.793645912 +0000 UTC m=+24.363053634" watchObservedRunningTime="2026-04-13 20:18:30.767190739 +0000 UTC m=+25.336598460" Apr 13 20:18:30.788072 kubelet[3387]: E0413 20:18:30.788040 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.788072 kubelet[3387]: W0413 20:18:30.788068 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.788582 kubelet[3387]: E0413 20:18:30.788352 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.788963 kubelet[3387]: E0413 20:18:30.788709 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.788963 kubelet[3387]: W0413 20:18:30.788725 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.788963 kubelet[3387]: E0413 20:18:30.788743 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.789271 kubelet[3387]: E0413 20:18:30.789251 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.789530 kubelet[3387]: W0413 20:18:30.789509 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.789603 kubelet[3387]: E0413 20:18:30.789535 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.791685 kubelet[3387]: E0413 20:18:30.791657 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.791685 kubelet[3387]: W0413 20:18:30.791684 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.791910 kubelet[3387]: E0413 20:18:30.791700 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.792130 kubelet[3387]: E0413 20:18:30.792022 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.792130 kubelet[3387]: W0413 20:18:30.792036 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.792130 kubelet[3387]: E0413 20:18:30.792050 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.792450 kubelet[3387]: E0413 20:18:30.792359 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.792450 kubelet[3387]: W0413 20:18:30.792372 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.792450 kubelet[3387]: E0413 20:18:30.792401 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.792828 kubelet[3387]: E0413 20:18:30.792656 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.792828 kubelet[3387]: W0413 20:18:30.792669 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.792828 kubelet[3387]: E0413 20:18:30.792681 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.792997 kubelet[3387]: E0413 20:18:30.792983 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.792997 kubelet[3387]: W0413 20:18:30.792994 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.793078 kubelet[3387]: E0413 20:18:30.793008 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.793419 kubelet[3387]: E0413 20:18:30.793258 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.793419 kubelet[3387]: W0413 20:18:30.793270 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.793419 kubelet[3387]: E0413 20:18:30.793283 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.793511 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795390 kubelet[3387]: W0413 20:18:30.793521 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.793533 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.793940 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795390 kubelet[3387]: W0413 20:18:30.793952 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.793965 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.794197 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795390 kubelet[3387]: W0413 20:18:30.794206 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.794235 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.795390 kubelet[3387]: E0413 20:18:30.794446 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795821 kubelet[3387]: W0413 20:18:30.794454 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795821 kubelet[3387]: E0413 20:18:30.794465 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.795821 kubelet[3387]: E0413 20:18:30.794682 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795821 kubelet[3387]: W0413 20:18:30.794691 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795821 kubelet[3387]: E0413 20:18:30.794703 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.795821 kubelet[3387]: E0413 20:18:30.794905 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.795821 kubelet[3387]: W0413 20:18:30.794914 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.795821 kubelet[3387]: E0413 20:18:30.794925 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.823602 kubelet[3387]: E0413 20:18:30.823570 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.823602 kubelet[3387]: W0413 20:18:30.823594 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.823836 kubelet[3387]: E0413 20:18:30.823618 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.823957 kubelet[3387]: E0413 20:18:30.823939 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.823957 kubelet[3387]: W0413 20:18:30.823954 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.824078 kubelet[3387]: E0413 20:18:30.823970 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.824378 kubelet[3387]: E0413 20:18:30.824358 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.824378 kubelet[3387]: W0413 20:18:30.824372 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.824508 kubelet[3387]: E0413 20:18:30.824387 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.825614 kubelet[3387]: E0413 20:18:30.825442 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.825614 kubelet[3387]: W0413 20:18:30.825458 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.825614 kubelet[3387]: E0413 20:18:30.825475 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.826055 kubelet[3387]: E0413 20:18:30.825743 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.826055 kubelet[3387]: W0413 20:18:30.825755 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.826055 kubelet[3387]: E0413 20:18:30.825823 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.826297 kubelet[3387]: E0413 20:18:30.826080 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.826297 kubelet[3387]: W0413 20:18:30.826090 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.826297 kubelet[3387]: E0413 20:18:30.826103 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.827050 kubelet[3387]: E0413 20:18:30.827032 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.827050 kubelet[3387]: W0413 20:18:30.827049 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.827205 kubelet[3387]: E0413 20:18:30.827064 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.827633 kubelet[3387]: E0413 20:18:30.827614 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.827633 kubelet[3387]: W0413 20:18:30.827631 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.827753 kubelet[3387]: E0413 20:18:30.827646 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.828503 kubelet[3387]: E0413 20:18:30.828481 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.828503 kubelet[3387]: W0413 20:18:30.828496 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.828775 kubelet[3387]: E0413 20:18:30.828510 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.829141 kubelet[3387]: E0413 20:18:30.828878 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.829141 kubelet[3387]: W0413 20:18:30.828889 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.829141 kubelet[3387]: E0413 20:18:30.828906 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.829599 kubelet[3387]: E0413 20:18:30.829543 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.829599 kubelet[3387]: W0413 20:18:30.829558 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.829599 kubelet[3387]: E0413 20:18:30.829572 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.831134 kubelet[3387]: E0413 20:18:30.830952 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.831134 kubelet[3387]: W0413 20:18:30.830968 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.831134 kubelet[3387]: E0413 20:18:30.830984 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.831592 kubelet[3387]: E0413 20:18:30.831324 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.831592 kubelet[3387]: W0413 20:18:30.831346 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.831592 kubelet[3387]: E0413 20:18:30.831361 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.832717 kubelet[3387]: E0413 20:18:30.832655 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.832717 kubelet[3387]: W0413 20:18:30.832672 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.832717 kubelet[3387]: E0413 20:18:30.832686 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.833027 kubelet[3387]: E0413 20:18:30.833008 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.833027 kubelet[3387]: W0413 20:18:30.833026 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.833136 kubelet[3387]: E0413 20:18:30.833039 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.833335 kubelet[3387]: E0413 20:18:30.833320 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.833401 kubelet[3387]: W0413 20:18:30.833336 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.833401 kubelet[3387]: E0413 20:18:30.833348 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.833658 kubelet[3387]: E0413 20:18:30.833633 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.833658 kubelet[3387]: W0413 20:18:30.833651 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.833658 kubelet[3387]: E0413 20:18:30.833672 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:18:30.835287 kubelet[3387]: E0413 20:18:30.835269 3387 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:18:30.835287 kubelet[3387]: W0413 20:18:30.835286 3387 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:18:30.835405 kubelet[3387]: E0413 20:18:30.835301 3387 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:18:30.841445 containerd[1996]: time="2026-04-13T20:18:30.841385370Z" level=info msg="StartContainer for \"5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688\" returns successfully" Apr 13 20:18:30.855922 systemd[1]: cri-containerd-5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688.scope: Deactivated successfully. Apr 13 20:18:30.953394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688-rootfs.mount: Deactivated successfully. 
Apr 13 20:18:31.350722 containerd[1996]: time="2026-04-13T20:18:31.339185967Z" level=info msg="shim disconnected" id=5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688 namespace=k8s.io Apr 13 20:18:31.351038 containerd[1996]: time="2026-04-13T20:18:31.350720983Z" level=warning msg="cleaning up after shim disconnected" id=5d27706b4b11a31a7d0f408a7e3900ed515506f31c376e1180a376929d8d2688 namespace=k8s.io Apr 13 20:18:31.351038 containerd[1996]: time="2026-04-13T20:18:31.350741515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:18:31.752961 containerd[1996]: time="2026-04-13T20:18:31.752464175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:18:32.623431 kubelet[3387]: E0413 20:18:32.623362 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:34.623954 kubelet[3387]: E0413 20:18:34.623890 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:36.624236 kubelet[3387]: E0413 20:18:36.623581 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:38.623524 kubelet[3387]: E0413 20:18:38.623460 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:40.623699 kubelet[3387]: E0413 20:18:40.623638 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:42.623889 kubelet[3387]: E0413 20:18:42.623771 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:44.142815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804496010.mount: Deactivated successfully. 
Apr 13 20:18:44.210765 containerd[1996]: time="2026-04-13T20:18:44.206388642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:18:44.210765 containerd[1996]: time="2026-04-13T20:18:44.203862352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:44.214737 containerd[1996]: time="2026-04-13T20:18:44.214684897Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:44.217361 containerd[1996]: time="2026-04-13T20:18:44.217292890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:44.218397 containerd[1996]: time="2026-04-13T20:18:44.218362050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.465834055s" Apr 13 20:18:44.218523 containerd[1996]: time="2026-04-13T20:18:44.218400402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:18:44.228442 containerd[1996]: time="2026-04-13T20:18:44.228356198Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:18:44.255056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423961579.mount: 
Deactivated successfully. Apr 13 20:18:44.259390 containerd[1996]: time="2026-04-13T20:18:44.259348716Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653\"" Apr 13 20:18:44.261270 containerd[1996]: time="2026-04-13T20:18:44.260314490Z" level=info msg="StartContainer for \"f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653\"" Apr 13 20:18:44.311534 systemd[1]: Started cri-containerd-f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653.scope - libcontainer container f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653. Apr 13 20:18:44.351599 containerd[1996]: time="2026-04-13T20:18:44.351550997Z" level=info msg="StartContainer for \"f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653\" returns successfully" Apr 13 20:18:44.404451 systemd[1]: cri-containerd-f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653.scope: Deactivated successfully. 
Apr 13 20:18:44.480840 containerd[1996]: time="2026-04-13T20:18:44.480767570Z" level=info msg="shim disconnected" id=f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653 namespace=k8s.io Apr 13 20:18:44.480840 containerd[1996]: time="2026-04-13T20:18:44.480832699Z" level=warning msg="cleaning up after shim disconnected" id=f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653 namespace=k8s.io Apr 13 20:18:44.480840 containerd[1996]: time="2026-04-13T20:18:44.480844230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:18:44.623554 kubelet[3387]: E0413 20:18:44.623490 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:44.797679 containerd[1996]: time="2026-04-13T20:18:44.796999055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:18:45.140463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f39e0de8076526a8a340cdd9bc2e8ad33a20239bf46aff9640aede0d1b463653-rootfs.mount: Deactivated successfully. 
Apr 13 20:18:46.624198 kubelet[3387]: E0413 20:18:46.624019 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:48.481083 containerd[1996]: time="2026-04-13T20:18:48.481032381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:48.482535 containerd[1996]: time="2026-04-13T20:18:48.482365409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:18:48.484091 containerd[1996]: time="2026-04-13T20:18:48.483741673Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:48.486685 containerd[1996]: time="2026-04-13T20:18:48.486645737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:48.487578 containerd[1996]: time="2026-04-13T20:18:48.487542146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.690498865s" Apr 13 20:18:48.487730 containerd[1996]: time="2026-04-13T20:18:48.487706619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:18:48.494048 containerd[1996]: time="2026-04-13T20:18:48.494008996Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:18:48.517992 containerd[1996]: time="2026-04-13T20:18:48.517934996Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea\"" Apr 13 20:18:48.520255 containerd[1996]: time="2026-04-13T20:18:48.518574282Z" level=info msg="StartContainer for \"bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea\"" Apr 13 20:18:48.560152 systemd[1]: Started cri-containerd-bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea.scope - libcontainer container bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea. Apr 13 20:18:48.596978 containerd[1996]: time="2026-04-13T20:18:48.596916372Z" level=info msg="StartContainer for \"bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea\" returns successfully" Apr 13 20:18:48.625144 kubelet[3387]: E0413 20:18:48.625084 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:49.611465 systemd[1]: cri-containerd-bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea.scope: Deactivated successfully. 
Apr 13 20:18:49.660646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea-rootfs.mount: Deactivated successfully. Apr 13 20:18:49.676998 containerd[1996]: time="2026-04-13T20:18:49.675350577Z" level=info msg="shim disconnected" id=bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea namespace=k8s.io Apr 13 20:18:49.676998 containerd[1996]: time="2026-04-13T20:18:49.675425655Z" level=warning msg="cleaning up after shim disconnected" id=bf31f040d0a7ac6ec3b71c2d354fc69c065df1b43d6768eda1f9080b4c66a7ea namespace=k8s.io Apr 13 20:18:49.676998 containerd[1996]: time="2026-04-13T20:18:49.675438973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:18:49.693243 kubelet[3387]: I0413 20:18:49.679735 3387 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 20:18:49.936912 containerd[1996]: time="2026-04-13T20:18:49.936644153Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:18:49.966611 containerd[1996]: time="2026-04-13T20:18:49.966524075Z" level=info msg="CreateContainer within sandbox \"6b13653b4c5fb53f1a671329071f64b831e2d97df931d3c72e1f2ecfa55b48cc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34\"" Apr 13 20:18:49.987002 containerd[1996]: time="2026-04-13T20:18:49.986868036Z" level=info msg="StartContainer for \"b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34\"" Apr 13 20:18:50.012602 kubelet[3387]: I0413 20:18:50.012552 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252wf\" (UniqueName: \"kubernetes.io/projected/5a72eb83-1ce6-44af-8741-4e35c3bb2264-kube-api-access-252wf\") pod \"calico-apiserver-89cb875f9-5djhb\" 
(UID: \"5a72eb83-1ce6-44af-8741-4e35c3bb2264\") " pod="calico-system/calico-apiserver-89cb875f9-5djhb" Apr 13 20:18:50.012765 kubelet[3387]: I0413 20:18:50.012622 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brh24\" (UniqueName: \"kubernetes.io/projected/be3b33e8-1040-48c5-abca-a4bbcd068298-kube-api-access-brh24\") pod \"whisker-5874db6db9-9vplk\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") " pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:50.012765 kubelet[3387]: I0413 20:18:50.012647 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq48j\" (UniqueName: \"kubernetes.io/projected/3e807f6a-3dfa-4c8a-9873-de1da77007ef-kube-api-access-sq48j\") pod \"coredns-66bc5c9577-ktw4w\" (UID: \"3e807f6a-3dfa-4c8a-9873-de1da77007ef\") " pod="kube-system/coredns-66bc5c9577-ktw4w" Apr 13 20:18:50.012765 kubelet[3387]: I0413 20:18:50.012675 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-nginx-config\") pod \"whisker-5874db6db9-9vplk\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") " pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:50.012765 kubelet[3387]: I0413 20:18:50.012702 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e807f6a-3dfa-4c8a-9873-de1da77007ef-config-volume\") pod \"coredns-66bc5c9577-ktw4w\" (UID: \"3e807f6a-3dfa-4c8a-9873-de1da77007ef\") " pod="kube-system/coredns-66bc5c9577-ktw4w" Apr 13 20:18:50.012765 kubelet[3387]: I0413 20:18:50.012724 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t29v\" (UniqueName: 
\"kubernetes.io/projected/d935e543-6716-4190-a20a-b6043d73a3aa-kube-api-access-7t29v\") pod \"calico-kube-controllers-557fbc7964-q7nls\" (UID: \"d935e543-6716-4190-a20a-b6043d73a3aa\") " pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" Apr 13 20:18:50.013003 kubelet[3387]: I0413 20:18:50.012759 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkxj\" (UniqueName: \"kubernetes.io/projected/b00fbc7c-661e-42f4-86eb-d3bcca719bc6-kube-api-access-fmkxj\") pod \"coredns-66bc5c9577-9jf8f\" (UID: \"b00fbc7c-661e-42f4-86eb-d3bcca719bc6\") " pod="kube-system/coredns-66bc5c9577-9jf8f" Apr 13 20:18:50.013003 kubelet[3387]: I0413 20:18:50.012790 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-backend-key-pair\") pod \"whisker-5874db6db9-9vplk\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") " pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:50.013003 kubelet[3387]: I0413 20:18:50.012819 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a72eb83-1ce6-44af-8741-4e35c3bb2264-calico-apiserver-certs\") pod \"calico-apiserver-89cb875f9-5djhb\" (UID: \"5a72eb83-1ce6-44af-8741-4e35c3bb2264\") " pod="calico-system/calico-apiserver-89cb875f9-5djhb" Apr 13 20:18:50.013003 kubelet[3387]: I0413 20:18:50.012842 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/693a1ffc-7980-4490-bc2b-aca384d54013-calico-apiserver-certs\") pod \"calico-apiserver-89cb875f9-6xnb6\" (UID: \"693a1ffc-7980-4490-bc2b-aca384d54013\") " pod="calico-system/calico-apiserver-89cb875f9-6xnb6" Apr 13 20:18:50.013003 kubelet[3387]: I0413 20:18:50.012869 
3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvcc\" (UniqueName: \"kubernetes.io/projected/693a1ffc-7980-4490-bc2b-aca384d54013-kube-api-access-rpvcc\") pod \"calico-apiserver-89cb875f9-6xnb6\" (UID: \"693a1ffc-7980-4490-bc2b-aca384d54013\") " pod="calico-system/calico-apiserver-89cb875f9-6xnb6" Apr 13 20:18:50.013264 kubelet[3387]: I0413 20:18:50.012898 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-ca-bundle\") pod \"whisker-5874db6db9-9vplk\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") " pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:50.013264 kubelet[3387]: I0413 20:18:50.012948 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00fbc7c-661e-42f4-86eb-d3bcca719bc6-config-volume\") pod \"coredns-66bc5c9577-9jf8f\" (UID: \"b00fbc7c-661e-42f4-86eb-d3bcca719bc6\") " pod="kube-system/coredns-66bc5c9577-9jf8f" Apr 13 20:18:50.013264 kubelet[3387]: I0413 20:18:50.012980 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d935e543-6716-4190-a20a-b6043d73a3aa-tigera-ca-bundle\") pod \"calico-kube-controllers-557fbc7964-q7nls\" (UID: \"d935e543-6716-4190-a20a-b6043d73a3aa\") " pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" Apr 13 20:18:50.061464 systemd[1]: Started cri-containerd-b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34.scope - libcontainer container b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34. 
Apr 13 20:18:50.072797 systemd[1]: Created slice kubepods-burstable-pod3e807f6a_3dfa_4c8a_9873_de1da77007ef.slice - libcontainer container kubepods-burstable-pod3e807f6a_3dfa_4c8a_9873_de1da77007ef.slice. Apr 13 20:18:50.081199 systemd[1]: Created slice kubepods-burstable-podb00fbc7c_661e_42f4_86eb_d3bcca719bc6.slice - libcontainer container kubepods-burstable-podb00fbc7c_661e_42f4_86eb_d3bcca719bc6.slice. Apr 13 20:18:50.091907 systemd[1]: Created slice kubepods-besteffort-podbe3b33e8_1040_48c5_abca_a4bbcd068298.slice - libcontainer container kubepods-besteffort-podbe3b33e8_1040_48c5_abca_a4bbcd068298.slice. Apr 13 20:18:50.100703 systemd[1]: Created slice kubepods-besteffort-pod5a72eb83_1ce6_44af_8741_4e35c3bb2264.slice - libcontainer container kubepods-besteffort-pod5a72eb83_1ce6_44af_8741_4e35c3bb2264.slice. Apr 13 20:18:50.112969 systemd[1]: Created slice kubepods-besteffort-pod693a1ffc_7980_4490_bc2b_aca384d54013.slice - libcontainer container kubepods-besteffort-pod693a1ffc_7980_4490_bc2b_aca384d54013.slice. 
Apr 13 20:18:50.113986 kubelet[3387]: I0413 20:18:50.113945 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3a8882f-9c17-49bf-8330-442e2e29fe2d-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-ngwcv\" (UID: \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\") " pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:50.114154 kubelet[3387]: I0413 20:18:50.114003 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a8882f-9c17-49bf-8330-442e2e29fe2d-config\") pod \"goldmane-cccfbd5cf-ngwcv\" (UID: \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\") " pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:50.114154 kubelet[3387]: I0413 20:18:50.114056 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/b3a8882f-9c17-49bf-8330-442e2e29fe2d-kube-api-access-rsk82\") pod \"goldmane-cccfbd5cf-ngwcv\" (UID: \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\") " pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:50.114154 kubelet[3387]: I0413 20:18:50.114100 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b3a8882f-9c17-49bf-8330-442e2e29fe2d-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-ngwcv\" (UID: \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\") " pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:50.167127 systemd[1]: Created slice kubepods-besteffort-podd935e543_6716_4190_a20a_b6043d73a3aa.slice - libcontainer container kubepods-besteffort-podd935e543_6716_4190_a20a_b6043d73a3aa.slice. 
Apr 13 20:18:50.183246 containerd[1996]: time="2026-04-13T20:18:50.179795320Z" level=info msg="StartContainer for \"b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34\" returns successfully" Apr 13 20:18:50.222726 systemd[1]: Created slice kubepods-besteffort-podb3a8882f_9c17_49bf_8330_442e2e29fe2d.slice - libcontainer container kubepods-besteffort-podb3a8882f_9c17_49bf_8330_442e2e29fe2d.slice. Apr 13 20:18:50.408092 containerd[1996]: time="2026-04-13T20:18:50.408042315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5874db6db9-9vplk,Uid:be3b33e8-1040-48c5-abca-a4bbcd068298,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.409192 containerd[1996]: time="2026-04-13T20:18:50.409111659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ktw4w,Uid:3e807f6a-3dfa-4c8a-9873-de1da77007ef,Namespace:kube-system,Attempt:0,}" Apr 13 20:18:50.430019 containerd[1996]: time="2026-04-13T20:18:50.429307471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-5djhb,Uid:5a72eb83-1ce6-44af-8741-4e35c3bb2264,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.431803 containerd[1996]: time="2026-04-13T20:18:50.431735755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9jf8f,Uid:b00fbc7c-661e-42f4-86eb-d3bcca719bc6,Namespace:kube-system,Attempt:0,}" Apr 13 20:18:50.449857 containerd[1996]: time="2026-04-13T20:18:50.449801564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-6xnb6,Uid:693a1ffc-7980-4490-bc2b-aca384d54013,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.501544 containerd[1996]: time="2026-04-13T20:18:50.500872243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557fbc7964-q7nls,Uid:d935e543-6716-4190-a20a-b6043d73a3aa,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.537239 containerd[1996]: time="2026-04-13T20:18:50.535723197Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngwcv,Uid:b3a8882f-9c17-49bf-8330-442e2e29fe2d,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.636546 systemd[1]: Created slice kubepods-besteffort-podf5c81d3c_90a0_440b_96eb_db49837fb4b5.slice - libcontainer container kubepods-besteffort-podf5c81d3c_90a0_440b_96eb_db49837fb4b5.slice. Apr 13 20:18:50.655653 containerd[1996]: time="2026-04-13T20:18:50.654565441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdjzz,Uid:f5c81d3c-90a0-440b-96eb-db49837fb4b5,Namespace:calico-system,Attempt:0,}" Apr 13 20:18:50.905087 kubelet[3387]: I0413 20:18:50.904726 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8w2wd" podStartSLOduration=3.059388686 podStartE2EDuration="25.879521693s" podCreationTimestamp="2026-04-13 20:18:25 +0000 UTC" firstStartedPulling="2026-04-13 20:18:25.668464064 +0000 UTC m=+20.237871762" lastFinishedPulling="2026-04-13 20:18:48.488597065 +0000 UTC m=+43.058004769" observedRunningTime="2026-04-13 20:18:50.879106758 +0000 UTC m=+45.448514480" watchObservedRunningTime="2026-04-13 20:18:50.879521693 +0000 UTC m=+45.448929414" Apr 13 20:18:51.335526 containerd[1996]: time="2026-04-13T20:18:51.335470373Z" level=error msg="Failed to destroy network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.349511 containerd[1996]: time="2026-04-13T20:18:51.349404887Z" level=error msg="encountered an error cleaning up failed sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 13 20:18:51.350423 containerd[1996]: time="2026-04-13T20:18:51.349903377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-5djhb,Uid:5a72eb83-1ce6-44af-8741-4e35c3bb2264,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.368699 kubelet[3387]: E0413 20:18:51.368642 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.373959 kubelet[3387]: E0413 20:18:51.370291 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-89cb875f9-5djhb" Apr 13 20:18:51.373959 kubelet[3387]: E0413 20:18:51.373936 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-89cb875f9-5djhb" Apr 13 20:18:51.375422 kubelet[3387]: E0413 20:18:51.374017 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89cb875f9-5djhb_calico-system(5a72eb83-1ce6-44af-8741-4e35c3bb2264)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89cb875f9-5djhb_calico-system(5a72eb83-1ce6-44af-8741-4e35c3bb2264)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-89cb875f9-5djhb" podUID="5a72eb83-1ce6-44af-8741-4e35c3bb2264" Apr 13 20:18:51.383116 containerd[1996]: time="2026-04-13T20:18:51.383070171Z" level=error msg="Failed to destroy network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.386386 containerd[1996]: time="2026-04-13T20:18:51.386054469Z" level=error msg="encountered an error cleaning up failed sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.386820 containerd[1996]: time="2026-04-13T20:18:51.386653733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5874db6db9-9vplk,Uid:be3b33e8-1040-48c5-abca-a4bbcd068298,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.389229 kubelet[3387]: E0413 20:18:51.388902 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.389229 kubelet[3387]: E0413 20:18:51.389029 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:51.389229 kubelet[3387]: E0413 20:18:51.389060 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5874db6db9-9vplk" Apr 13 20:18:51.389826 kubelet[3387]: E0413 20:18:51.389145 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5874db6db9-9vplk_calico-system(be3b33e8-1040-48c5-abca-a4bbcd068298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5874db6db9-9vplk_calico-system(be3b33e8-1040-48c5-abca-a4bbcd068298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5874db6db9-9vplk" podUID="be3b33e8-1040-48c5-abca-a4bbcd068298" Apr 13 20:18:51.398562 containerd[1996]: time="2026-04-13T20:18:51.398511839Z" level=error msg="Failed to destroy network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.399100 containerd[1996]: time="2026-04-13T20:18:51.399064276Z" level=error msg="encountered an error cleaning up failed sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.399815 containerd[1996]: time="2026-04-13T20:18:51.399267625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngwcv,Uid:b3a8882f-9c17-49bf-8330-442e2e29fe2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.399956 kubelet[3387]: E0413 20:18:51.399496 3387 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.399956 kubelet[3387]: E0413 20:18:51.399552 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:51.399956 kubelet[3387]: E0413 20:18:51.399577 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-ngwcv" Apr 13 20:18:51.401026 kubelet[3387]: E0413 20:18:51.399644 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-ngwcv_calico-system(b3a8882f-9c17-49bf-8330-442e2e29fe2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-ngwcv_calico-system(b3a8882f-9c17-49bf-8330-442e2e29fe2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-ngwcv" podUID="b3a8882f-9c17-49bf-8330-442e2e29fe2d" Apr 13 20:18:51.401584 containerd[1996]: time="2026-04-13T20:18:51.401303610Z" level=error msg="Failed to destroy network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.402412 containerd[1996]: time="2026-04-13T20:18:51.402107793Z" level=error msg="Failed to destroy network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.403179 containerd[1996]: time="2026-04-13T20:18:51.402919851Z" level=error msg="encountered an error cleaning up failed sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.403568 containerd[1996]: time="2026-04-13T20:18:51.403496279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ktw4w,Uid:3e807f6a-3dfa-4c8a-9873-de1da77007ef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.404232 containerd[1996]: time="2026-04-13T20:18:51.403828184Z" level=error msg="encountered an 
error cleaning up failed sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.404232 containerd[1996]: time="2026-04-13T20:18:51.403884543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557fbc7964-q7nls,Uid:d935e543-6716-4190-a20a-b6043d73a3aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.404387 kubelet[3387]: E0413 20:18:51.404095 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.404387 kubelet[3387]: E0413 20:18:51.404153 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" Apr 13 20:18:51.404387 kubelet[3387]: E0413 20:18:51.404178 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" Apr 13 20:18:51.406504 kubelet[3387]: E0413 20:18:51.405946 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-557fbc7964-q7nls_calico-system(d935e543-6716-4190-a20a-b6043d73a3aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-557fbc7964-q7nls_calico-system(d935e543-6716-4190-a20a-b6043d73a3aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" podUID="d935e543-6716-4190-a20a-b6043d73a3aa" Apr 13 20:18:51.407601 kubelet[3387]: E0413 20:18:51.407559 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.407997 kubelet[3387]: E0413 20:18:51.407971 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ktw4w" Apr 13 20:18:51.408131 kubelet[3387]: E0413 20:18:51.408112 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ktw4w" Apr 13 20:18:51.408483 kubelet[3387]: E0413 20:18:51.408313 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ktw4w_kube-system(3e807f6a-3dfa-4c8a-9873-de1da77007ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ktw4w_kube-system(3e807f6a-3dfa-4c8a-9873-de1da77007ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ktw4w" podUID="3e807f6a-3dfa-4c8a-9873-de1da77007ef" Apr 13 20:18:51.415013 containerd[1996]: time="2026-04-13T20:18:51.414796619Z" level=error msg="Failed to destroy network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.415424 containerd[1996]: time="2026-04-13T20:18:51.415294026Z" level=error msg="Failed to destroy network for sandbox 
\"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.416519 containerd[1996]: time="2026-04-13T20:18:51.415968670Z" level=error msg="encountered an error cleaning up failed sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.416519 containerd[1996]: time="2026-04-13T20:18:51.416065676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9jf8f,Uid:b00fbc7c-661e-42f4-86eb-d3bcca719bc6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.416969 kubelet[3387]: E0413 20:18:51.416460 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.416969 kubelet[3387]: E0413 20:18:51.416628 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9jf8f" Apr 13 20:18:51.416969 kubelet[3387]: E0413 20:18:51.416656 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9jf8f" Apr 13 20:18:51.417550 kubelet[3387]: E0413 20:18:51.417035 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9jf8f_kube-system(b00fbc7c-661e-42f4-86eb-d3bcca719bc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9jf8f_kube-system(b00fbc7c-661e-42f4-86eb-d3bcca719bc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9jf8f" podUID="b00fbc7c-661e-42f4-86eb-d3bcca719bc6" Apr 13 20:18:51.417883 containerd[1996]: time="2026-04-13T20:18:51.417692126Z" level=error msg="encountered an error cleaning up failed sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.417883 containerd[1996]: time="2026-04-13T20:18:51.417752515Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdjzz,Uid:f5c81d3c-90a0-440b-96eb-db49837fb4b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.418204 kubelet[3387]: E0413 20:18:51.418003 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.418204 kubelet[3387]: E0413 20:18:51.418051 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdjzz" Apr 13 20:18:51.418204 kubelet[3387]: E0413 20:18:51.418074 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdjzz" Apr 13 20:18:51.418526 kubelet[3387]: E0413 20:18:51.418129 3387 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cdjzz_calico-system(f5c81d3c-90a0-440b-96eb-db49837fb4b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cdjzz_calico-system(f5c81d3c-90a0-440b-96eb-db49837fb4b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5" Apr 13 20:18:51.422032 containerd[1996]: time="2026-04-13T20:18:51.421986145Z" level=error msg="Failed to destroy network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.422420 containerd[1996]: time="2026-04-13T20:18:51.422385468Z" level=error msg="encountered an error cleaning up failed sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.422508 containerd[1996]: time="2026-04-13T20:18:51.422442630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-6xnb6,Uid:693a1ffc-7980-4490-bc2b-aca384d54013,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.422734 kubelet[3387]: E0413 20:18:51.422701 3387 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:18:51.422807 kubelet[3387]: E0413 20:18:51.422752 3387 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-89cb875f9-6xnb6" Apr 13 20:18:51.422807 kubelet[3387]: E0413 20:18:51.422777 3387 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-89cb875f9-6xnb6" Apr 13 20:18:51.422905 kubelet[3387]: E0413 20:18:51.422857 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-89cb875f9-6xnb6_calico-system(693a1ffc-7980-4490-bc2b-aca384d54013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-89cb875f9-6xnb6_calico-system(693a1ffc-7980-4490-bc2b-aca384d54013)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-89cb875f9-6xnb6" podUID="693a1ffc-7980-4490-bc2b-aca384d54013" Apr 13 20:18:51.658017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327-shm.mount: Deactivated successfully. Apr 13 20:18:51.658683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15-shm.mount: Deactivated successfully. Apr 13 20:18:51.658886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996-shm.mount: Deactivated successfully. Apr 13 20:18:51.659081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f-shm.mount: Deactivated successfully. Apr 13 20:18:51.659275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c-shm.mount: Deactivated successfully. Apr 13 20:18:51.659476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae-shm.mount: Deactivated successfully. Apr 13 20:18:51.659566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5-shm.mount: Deactivated successfully. Apr 13 20:18:51.659652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a-shm.mount: Deactivated successfully. 
Apr 13 20:18:51.838995 containerd[1996]: time="2026-04-13T20:18:51.838952971Z" level=info msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\""
Apr 13 20:18:51.843622 containerd[1996]: time="2026-04-13T20:18:51.843227526Z" level=info msg="Ensure that sandbox 504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15 in task-service has been cleanup successfully"
Apr 13 20:18:51.845470 kubelet[3387]: I0413 20:18:51.845423 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:18:51.852058 kubelet[3387]: I0413 20:18:51.851376 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:18:51.854040 containerd[1996]: time="2026-04-13T20:18:51.853996532Z" level=info msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\""
Apr 13 20:18:51.855753 containerd[1996]: time="2026-04-13T20:18:51.855713968Z" level=info msg="Ensure that sandbox 388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5 in task-service has been cleanup successfully"
Apr 13 20:18:51.926707 containerd[1996]: time="2026-04-13T20:18:51.926480109Z" level=error msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" failed" error="failed to destroy network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:51.939095 kubelet[3387]: E0413 20:18:51.938961 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:18:51.952912 containerd[1996]: time="2026-04-13T20:18:51.952861148Z" level=error msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" failed" error="failed to destroy network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:51.962400 kubelet[3387]: I0413 20:18:51.962195 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a"
Apr 13 20:18:51.962400 kubelet[3387]: I0413 20:18:51.962275 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:18:51.962400 kubelet[3387]: I0413 20:18:51.962304 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:18:51.962400 kubelet[3387]: I0413 20:18:51.962320 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996"
Apr 13 20:18:51.962400 kubelet[3387]: I0413 20:18:51.962348 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae"
Apr 13 20:18:51.963090 kubelet[3387]: E0413 20:18:51.963012 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:18:51.991526 kubelet[3387]: E0413 20:18:51.939032 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"}
Apr 13 20:18:51.993331 kubelet[3387]: E0413 20:18:51.991714 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d935e543-6716-4190-a20a-b6043d73a3aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:51.993331 kubelet[3387]: E0413 20:18:51.991757 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d935e543-6716-4190-a20a-b6043d73a3aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" podUID="d935e543-6716-4190-a20a-b6043d73a3aa"
Apr 13 20:18:51.993331 kubelet[3387]: E0413 20:18:51.963071 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"}
Apr 13 20:18:51.993331 kubelet[3387]: E0413 20:18:51.991971 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:51.993873 containerd[1996]: time="2026-04-13T20:18:51.992446364Z" level=info msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\""
Apr 13 20:18:51.993873 containerd[1996]: time="2026-04-13T20:18:51.992664707Z" level=info msg="Ensure that sandbox 5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae in task-service has been cleanup successfully"
Apr 13 20:18:51.993988 kubelet[3387]: E0413 20:18:51.992041 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3a8882f-9c17-49bf-8330-442e2e29fe2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-ngwcv" podUID="b3a8882f-9c17-49bf-8330-442e2e29fe2d"
Apr 13 20:18:52.007253 kubelet[3387]: I0413 20:18:52.006608 3387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:52.007478 containerd[1996]: time="2026-04-13T20:18:52.007426811Z" level=info msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\""
Apr 13 20:18:52.007696 containerd[1996]: time="2026-04-13T20:18:52.007656278Z" level=info msg="Ensure that sandbox 162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f in task-service has been cleanup successfully"
Apr 13 20:18:52.011205 containerd[1996]: time="2026-04-13T20:18:52.010648680Z" level=info msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\""
Apr 13 20:18:52.011205 containerd[1996]: time="2026-04-13T20:18:52.010869724Z" level=info msg="Ensure that sandbox 5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327 in task-service has been cleanup successfully"
Apr 13 20:18:52.011494 containerd[1996]: time="2026-04-13T20:18:52.011466376Z" level=info msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\""
Apr 13 20:18:52.014160 containerd[1996]: time="2026-04-13T20:18:52.014016674Z" level=info msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\""
Apr 13 20:18:52.015089 containerd[1996]: time="2026-04-13T20:18:52.014604729Z" level=info msg="Ensure that sandbox 0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996 in task-service has been cleanup successfully"
Apr 13 20:18:52.018010 containerd[1996]: time="2026-04-13T20:18:52.017650789Z" level=info msg="Ensure that sandbox 95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a in task-service has been cleanup successfully"
Apr 13 20:18:52.021708 containerd[1996]: time="2026-04-13T20:18:52.014780367Z" level=info msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\""
Apr 13 20:18:52.027387 containerd[1996]: time="2026-04-13T20:18:52.027339987Z" level=info msg="Ensure that sandbox cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c in task-service has been cleanup successfully"
Apr 13 20:18:52.143736 containerd[1996]: time="2026-04-13T20:18:52.143609256Z" level=error msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" failed" error="failed to destroy network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.144362 kubelet[3387]: E0413 20:18:52.143981 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:52.144362 kubelet[3387]: E0413 20:18:52.144033 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"}
Apr 13 20:18:52.144362 kubelet[3387]: E0413 20:18:52.144078 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3b33e8-1040-48c5-abca-a4bbcd068298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.144362 kubelet[3387]: E0413 20:18:52.144118 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3b33e8-1040-48c5-abca-a4bbcd068298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5874db6db9-9vplk" podUID="be3b33e8-1040-48c5-abca-a4bbcd068298"
Apr 13 20:18:52.218599 containerd[1996]: time="2026-04-13T20:18:52.216601875Z" level=error msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" failed" error="failed to destroy network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.218717 kubelet[3387]: E0413 20:18:52.216895 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae"
Apr 13 20:18:52.218717 kubelet[3387]: E0413 20:18:52.216951 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae"}
Apr 13 20:18:52.218717 kubelet[3387]: E0413 20:18:52.216997 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e807f6a-3dfa-4c8a-9873-de1da77007ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.218717 kubelet[3387]: E0413 20:18:52.217041 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e807f6a-3dfa-4c8a-9873-de1da77007ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ktw4w" podUID="3e807f6a-3dfa-4c8a-9873-de1da77007ef"
Apr 13 20:18:52.242620 containerd[1996]: time="2026-04-13T20:18:52.242519438Z" level=error msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" failed" error="failed to destroy network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.244507 kubelet[3387]: E0413 20:18:52.244404 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a"
Apr 13 20:18:52.244507 kubelet[3387]: E0413 20:18:52.244467 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a"}
Apr 13 20:18:52.244507 kubelet[3387]: E0413 20:18:52.244502 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b00fbc7c-661e-42f4-86eb-d3bcca719bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.245318 kubelet[3387]: E0413 20:18:52.244547 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b00fbc7c-661e-42f4-86eb-d3bcca719bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9jf8f" podUID="b00fbc7c-661e-42f4-86eb-d3bcca719bc6"
Apr 13 20:18:52.259402 containerd[1996]: time="2026-04-13T20:18:52.258257692Z" level=error msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" failed" error="failed to destroy network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.259569 kubelet[3387]: E0413 20:18:52.258693 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996"
Apr 13 20:18:52.259569 kubelet[3387]: E0413 20:18:52.258754 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996"}
Apr 13 20:18:52.259569 kubelet[3387]: E0413 20:18:52.258796 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a72eb83-1ce6-44af-8741-4e35c3bb2264\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.259569 kubelet[3387]: E0413 20:18:52.258833 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a72eb83-1ce6-44af-8741-4e35c3bb2264\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-89cb875f9-5djhb" podUID="5a72eb83-1ce6-44af-8741-4e35c3bb2264"
Apr 13 20:18:52.260957 kubelet[3387]: E0413 20:18:52.260837 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:18:52.260957 kubelet[3387]: E0413 20:18:52.260876 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"}
Apr 13 20:18:52.260957 kubelet[3387]: E0413 20:18:52.260910 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"693a1ffc-7980-4490-bc2b-aca384d54013\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.260957 kubelet[3387]: E0413 20:18:52.260947 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"693a1ffc-7980-4490-bc2b-aca384d54013\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-89cb875f9-6xnb6" podUID="693a1ffc-7980-4490-bc2b-aca384d54013"
Apr 13 20:18:52.261151 containerd[1996]: time="2026-04-13T20:18:52.260631145Z" level=error msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" failed" error="failed to destroy network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.278510 containerd[1996]: time="2026-04-13T20:18:52.278442970Z" level=error msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" failed" error="failed to destroy network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:18:52.279934 kubelet[3387]: E0413 20:18:52.279392 3387 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:18:52.279934 kubelet[3387]: E0413 20:18:52.279484 3387 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"}
Apr 13 20:18:52.279934 kubelet[3387]: E0413 20:18:52.279530 3387 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 13 20:18:52.279934 kubelet[3387]: E0413 20:18:52.279569 3387 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5c81d3c-90a0-440b-96eb-db49837fb4b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdjzz" podUID="f5c81d3c-90a0-440b-96eb-db49837fb4b5"
Apr 13 20:18:52.295300 systemd[1]: run-containerd-runc-k8s.io-b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34-runc.IRBlfe.mount: Deactivated successfully.
Apr 13 20:18:53.000941 containerd[1996]: time="2026-04-13T20:18:53.000520966Z" level=info msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\""
Apr 13 20:18:53.059614 systemd[1]: run-containerd-runc-k8s.io-b38eef3bd41fcd96f50556df1b4a701f945dc18296c4df21bf4bff6bf7594d34-runc.xeD2Wy.mount: Deactivated successfully.
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.164 [INFO][4687] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.165 [INFO][4687] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" iface="eth0" netns="/var/run/netns/cni-1417fc00-0673-6a68-6399-fb0d8ced5590"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.166 [INFO][4687] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" iface="eth0" netns="/var/run/netns/cni-1417fc00-0673-6a68-6399-fb0d8ced5590"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.173 [INFO][4687] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" iface="eth0" netns="/var/run/netns/cni-1417fc00-0673-6a68-6399-fb0d8ced5590"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.173 [INFO][4687] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.173 [INFO][4687] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.269 [INFO][4710] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.269 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.269 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.279 [WARNING][4710] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.279 [INFO][4710] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0"
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.281 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:18:53.286484 containerd[1996]: 2026-04-13 20:18:53.284 [INFO][4687] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f"
Apr 13 20:18:53.288509 containerd[1996]: time="2026-04-13T20:18:53.288350187Z" level=info msg="TearDown network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" successfully"
Apr 13 20:18:53.288509 containerd[1996]: time="2026-04-13T20:18:53.288393152Z" level=info msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" returns successfully"
Apr 13 20:18:53.290907 systemd[1]: run-netns-cni\x2d1417fc00\x2d0673\x2d6a68\x2d6399\x2dfb0d8ced5590.mount: Deactivated successfully.
Apr 13 20:18:53.351923 kubelet[3387]: I0413 20:18:53.351879 3387 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-nginx-config\") pod \"be3b33e8-1040-48c5-abca-a4bbcd068298\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") "
Apr 13 20:18:53.351923 kubelet[3387]: I0413 20:18:53.351934 3387 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-backend-key-pair\") pod \"be3b33e8-1040-48c5-abca-a4bbcd068298\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") "
Apr 13 20:18:53.352607 kubelet[3387]: I0413 20:18:53.351969 3387 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brh24\" (UniqueName: \"kubernetes.io/projected/be3b33e8-1040-48c5-abca-a4bbcd068298-kube-api-access-brh24\") pod \"be3b33e8-1040-48c5-abca-a4bbcd068298\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") "
Apr 13 20:18:53.352607 kubelet[3387]: I0413 20:18:53.352007 3387 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-ca-bundle\") pod \"be3b33e8-1040-48c5-abca-a4bbcd068298\" (UID: \"be3b33e8-1040-48c5-abca-a4bbcd068298\") "
Apr 13 20:18:53.366623 systemd[1]: var-lib-kubelet-pods-be3b33e8\x2d1040\x2d48c5\x2dabca\x2da4bbcd068298-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 13 20:18:53.368679 kubelet[3387]: I0413 20:18:53.368562 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "be3b33e8-1040-48c5-abca-a4bbcd068298" (UID: "be3b33e8-1040-48c5-abca-a4bbcd068298"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:18:53.368679 kubelet[3387]: I0413 20:18:53.361494 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "be3b33e8-1040-48c5-abca-a4bbcd068298" (UID: "be3b33e8-1040-48c5-abca-a4bbcd068298"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:18:53.368679 kubelet[3387]: I0413 20:18:53.364961 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "be3b33e8-1040-48c5-abca-a4bbcd068298" (UID: "be3b33e8-1040-48c5-abca-a4bbcd068298"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 13 20:18:53.372458 kubelet[3387]: I0413 20:18:53.372387 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3b33e8-1040-48c5-abca-a4bbcd068298-kube-api-access-brh24" (OuterVolumeSpecName: "kube-api-access-brh24") pod "be3b33e8-1040-48c5-abca-a4bbcd068298" (UID: "be3b33e8-1040-48c5-abca-a4bbcd068298"). InnerVolumeSpecName "kube-api-access-brh24". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:18:53.374559 systemd[1]: var-lib-kubelet-pods-be3b33e8\x2d1040\x2d48c5\x2dabca\x2da4bbcd068298-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbrh24.mount: Deactivated successfully.
Apr 13 20:18:53.452813 kubelet[3387]: I0413 20:18:53.452758 3387 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-ca-bundle\") on node \"ip-172-31-17-102\" DevicePath \"\""
Apr 13 20:18:53.452813 kubelet[3387]: I0413 20:18:53.452801 3387 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/be3b33e8-1040-48c5-abca-a4bbcd068298-nginx-config\") on node \"ip-172-31-17-102\" DevicePath \"\""
Apr 13 20:18:53.452813 kubelet[3387]: I0413 20:18:53.452817 3387 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be3b33e8-1040-48c5-abca-a4bbcd068298-whisker-backend-key-pair\") on node \"ip-172-31-17-102\" DevicePath \"\""
Apr 13 20:18:53.453029 kubelet[3387]: I0413 20:18:53.452829 3387 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brh24\" (UniqueName: \"kubernetes.io/projected/be3b33e8-1040-48c5-abca-a4bbcd068298-kube-api-access-brh24\") on node \"ip-172-31-17-102\" DevicePath \"\""
Apr 13 20:18:53.643006 systemd[1]: Removed slice kubepods-besteffort-podbe3b33e8_1040_48c5_abca_a4bbcd068298.slice - libcontainer container kubepods-besteffort-podbe3b33e8_1040_48c5_abca_a4bbcd068298.slice.
Apr 13 20:18:54.141098 systemd[1]: Created slice kubepods-besteffort-pod6c98e28f_9a57_4641_baf6_b358179d18a6.slice - libcontainer container kubepods-besteffort-pod6c98e28f_9a57_4641_baf6_b358179d18a6.slice.
Apr 13 20:18:54.261554 kubelet[3387]: I0413 20:18:54.261312 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c98e28f-9a57-4641-baf6-b358179d18a6-whisker-backend-key-pair\") pod \"whisker-6646db5b4b-kxbcf\" (UID: \"6c98e28f-9a57-4641-baf6-b358179d18a6\") " pod="calico-system/whisker-6646db5b4b-kxbcf" Apr 13 20:18:54.261554 kubelet[3387]: I0413 20:18:54.261364 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c98e28f-9a57-4641-baf6-b358179d18a6-whisker-ca-bundle\") pod \"whisker-6646db5b4b-kxbcf\" (UID: \"6c98e28f-9a57-4641-baf6-b358179d18a6\") " pod="calico-system/whisker-6646db5b4b-kxbcf" Apr 13 20:18:54.261554 kubelet[3387]: I0413 20:18:54.261392 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6c98e28f-9a57-4641-baf6-b358179d18a6-nginx-config\") pod \"whisker-6646db5b4b-kxbcf\" (UID: \"6c98e28f-9a57-4641-baf6-b358179d18a6\") " pod="calico-system/whisker-6646db5b4b-kxbcf" Apr 13 20:18:54.261554 kubelet[3387]: I0413 20:18:54.261424 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkmrw\" (UniqueName: \"kubernetes.io/projected/6c98e28f-9a57-4641-baf6-b358179d18a6-kube-api-access-tkmrw\") pod \"whisker-6646db5b4b-kxbcf\" (UID: \"6c98e28f-9a57-4641-baf6-b358179d18a6\") " pod="calico-system/whisker-6646db5b4b-kxbcf" Apr 13 20:18:54.350314 kernel: calico-node[4742]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:18:54.493255 containerd[1996]: time="2026-04-13T20:18:54.492975461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6646db5b4b-kxbcf,Uid:6c98e28f-9a57-4641-baf6-b358179d18a6,Namespace:calico-system,Attempt:0,}" Apr 13 
20:18:54.986204 systemd-networkd[1908]: cali1a3b1681d8b: Link UP Apr 13 20:18:54.990501 systemd-networkd[1908]: cali1a3b1681d8b: Gained carrier Apr 13 20:18:55.048814 (udev-worker)[4874]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.644 [INFO][4851] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0 whisker-6646db5b4b- calico-system 6c98e28f-9a57-4641-baf6-b358179d18a6 960 0 2026-04-13 20:18:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6646db5b4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-102 whisker-6646db5b4b-kxbcf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1a3b1681d8b [] [] }} ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.651 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.778 [INFO][4864] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" HandleID="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Workload="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.798 [INFO][4864] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" HandleID="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Workload="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038fbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"whisker-6646db5b4b-kxbcf", "timestamp":"2026-04-13 20:18:54.778257903 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002666e0)} Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.798 [INFO][4864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.799 [INFO][4864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.799 [INFO][4864] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.806 [INFO][4864] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.827 [INFO][4864] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.843 [INFO][4864] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.852 [INFO][4864] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.855 [INFO][4864] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.855 [INFO][4864] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.864 [INFO][4864] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319 Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.872 [INFO][4864] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.888 [INFO][4864] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 
handle="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.888 [INFO][4864] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" host="ip-172-31-17-102" Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.888 [INFO][4864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:18:55.062056 containerd[1996]: 2026-04-13 20:18:54.888 [INFO][4864] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" HandleID="k8s-pod-network.07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Workload="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:54.891 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0", GenerateName:"whisker-6646db5b4b-", Namespace:"calico-system", SelfLink:"", UID:"6c98e28f-9a57-4641-baf6-b358179d18a6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6646db5b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"whisker-6646db5b4b-kxbcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a3b1681d8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:54.891 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.65/32] ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:54.891 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a3b1681d8b ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:54.963 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:54.976 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" 
Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0", GenerateName:"whisker-6646db5b4b-", Namespace:"calico-system", SelfLink:"", UID:"6c98e28f-9a57-4641-baf6-b358179d18a6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6646db5b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319", Pod:"whisker-6646db5b4b-kxbcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a3b1681d8b", MAC:"fa:a8:35:32:98:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:18:55.066818 containerd[1996]: 2026-04-13 20:18:55.037 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319" Namespace="calico-system" Pod="whisker-6646db5b4b-kxbcf" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--6646db5b4b--kxbcf-eth0" Apr 13 20:18:55.543379 
(udev-worker)[4873]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:18:55.549137 systemd-networkd[1908]: vxlan.calico: Link UP Apr 13 20:18:55.549149 systemd-networkd[1908]: vxlan.calico: Gained carrier Apr 13 20:18:55.606083 containerd[1996]: time="2026-04-13T20:18:55.605968169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:18:55.607203 containerd[1996]: time="2026-04-13T20:18:55.607146594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:18:55.607731 containerd[1996]: time="2026-04-13T20:18:55.607356677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:55.608645 containerd[1996]: time="2026-04-13T20:18:55.608602732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:18:55.702359 kubelet[3387]: I0413 20:18:55.701550 3387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3b33e8-1040-48c5-abca-a4bbcd068298" path="/var/lib/kubelet/pods/be3b33e8-1040-48c5-abca-a4bbcd068298/volumes" Apr 13 20:18:55.754578 systemd[1]: Started cri-containerd-07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319.scope - libcontainer container 07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319. 
Apr 13 20:18:55.938052 containerd[1996]: time="2026-04-13T20:18:55.938006941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6646db5b4b-kxbcf,Uid:6c98e28f-9a57-4641-baf6-b358179d18a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319\"" Apr 13 20:18:55.970516 containerd[1996]: time="2026-04-13T20:18:55.970473744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:18:56.914497 systemd-networkd[1908]: cali1a3b1681d8b: Gained IPv6LL Apr 13 20:18:57.365378 systemd-networkd[1908]: vxlan.calico: Gained IPv6LL Apr 13 20:18:57.726670 containerd[1996]: time="2026-04-13T20:18:57.726465948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:18:57.731375 containerd[1996]: time="2026-04-13T20:18:57.731323827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:57.736828 containerd[1996]: time="2026-04-13T20:18:57.736782986Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:57.737795 containerd[1996]: time="2026-04-13T20:18:57.737735653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:18:57.739483 containerd[1996]: time="2026-04-13T20:18:57.738639638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", 
size \"7595926\" in 1.763051116s" Apr 13 20:18:57.739483 containerd[1996]: time="2026-04-13T20:18:57.738681996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:18:57.749191 containerd[1996]: time="2026-04-13T20:18:57.749142802Z" level=info msg="CreateContainer within sandbox \"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:18:57.771255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296493760.mount: Deactivated successfully. Apr 13 20:18:57.783409 containerd[1996]: time="2026-04-13T20:18:57.783357666Z" level=info msg="CreateContainer within sandbox \"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938\"" Apr 13 20:18:57.785484 containerd[1996]: time="2026-04-13T20:18:57.784194568Z" level=info msg="StartContainer for \"1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938\"" Apr 13 20:18:57.823611 systemd[1]: run-containerd-runc-k8s.io-1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938-runc.6uPmFi.mount: Deactivated successfully. Apr 13 20:18:57.836481 systemd[1]: Started cri-containerd-1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938.scope - libcontainer container 1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938. 
Apr 13 20:18:57.886629 containerd[1996]: time="2026-04-13T20:18:57.886495627Z" level=info msg="StartContainer for \"1f3ed0588059ec772f2688e109e8d5ab895bd4fde8989b4be3ac8872798cb938\" returns successfully" Apr 13 20:18:57.889002 containerd[1996]: time="2026-04-13T20:18:57.888963909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:18:59.599933 ntpd[1958]: Listen normally on 8 vxlan.calico 192.168.18.64:123 Apr 13 20:18:59.600028 ntpd[1958]: Listen normally on 9 cali1a3b1681d8b [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:18:59.602292 ntpd[1958]: 13 Apr 20:18:59 ntpd[1958]: Listen normally on 8 vxlan.calico 192.168.18.64:123 Apr 13 20:18:59.602292 ntpd[1958]: 13 Apr 20:18:59 ntpd[1958]: Listen normally on 9 cali1a3b1681d8b [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:18:59.602292 ntpd[1958]: 13 Apr 20:18:59 ntpd[1958]: Listen normally on 10 vxlan.calico [fe80::6427:b1ff:febb:f262%5]:123 Apr 13 20:18:59.600087 ntpd[1958]: Listen normally on 10 vxlan.calico [fe80::6427:b1ff:febb:f262%5]:123 Apr 13 20:19:00.147410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115401050.mount: Deactivated successfully. 
Apr 13 20:19:00.182090 containerd[1996]: time="2026-04-13T20:19:00.182034682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:00.183540 containerd[1996]: time="2026-04-13T20:19:00.183358470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:19:00.185794 containerd[1996]: time="2026-04-13T20:19:00.184941694Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:00.188140 containerd[1996]: time="2026-04-13T20:19:00.188035555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:00.189741 containerd[1996]: time="2026-04-13T20:19:00.189038861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.300029128s" Apr 13 20:19:00.189741 containerd[1996]: time="2026-04-13T20:19:00.189082539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:19:00.194244 containerd[1996]: time="2026-04-13T20:19:00.194188932Z" level=info msg="CreateContainer within sandbox \"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:19:00.222767 
containerd[1996]: time="2026-04-13T20:19:00.222701055Z" level=info msg="CreateContainer within sandbox \"07a5b27203030e6e85b9a27f7246273c9408f23fcdd243d0af26f8840d472319\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fbc82249bab15eb5c8edb235caf9e18eb065f20e43e027c78d6029681fc15d16\"" Apr 13 20:19:00.223801 containerd[1996]: time="2026-04-13T20:19:00.223764767Z" level=info msg="StartContainer for \"fbc82249bab15eb5c8edb235caf9e18eb065f20e43e027c78d6029681fc15d16\"" Apr 13 20:19:00.267695 systemd[1]: Started cri-containerd-fbc82249bab15eb5c8edb235caf9e18eb065f20e43e027c78d6029681fc15d16.scope - libcontainer container fbc82249bab15eb5c8edb235caf9e18eb065f20e43e027c78d6029681fc15d16. Apr 13 20:19:00.333793 containerd[1996]: time="2026-04-13T20:19:00.333742745Z" level=info msg="StartContainer for \"fbc82249bab15eb5c8edb235caf9e18eb065f20e43e027c78d6029681fc15d16\" returns successfully" Apr 13 20:19:01.167507 kubelet[3387]: I0413 20:19:01.167436 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6646db5b4b-kxbcf" podStartSLOduration=2.9148941649999998 podStartE2EDuration="7.148839897s" podCreationTimestamp="2026-04-13 20:18:54 +0000 UTC" firstStartedPulling="2026-04-13 20:18:55.956337179 +0000 UTC m=+50.525744890" lastFinishedPulling="2026-04-13 20:19:00.190282904 +0000 UTC m=+54.759690622" observedRunningTime="2026-04-13 20:19:01.148458946 +0000 UTC m=+55.717866668" watchObservedRunningTime="2026-04-13 20:19:01.148839897 +0000 UTC m=+55.718247623" Apr 13 20:19:02.624014 containerd[1996]: time="2026-04-13T20:19:02.623594637Z" level=info msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\"" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.697 [INFO][5124] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.699 
[INFO][5124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" iface="eth0" netns="/var/run/netns/cni-89321f11-2b59-31e6-8b46-38d3d1fecb79" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.700 [INFO][5124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" iface="eth0" netns="/var/run/netns/cni-89321f11-2b59-31e6-8b46-38d3d1fecb79" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.700 [INFO][5124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" iface="eth0" netns="/var/run/netns/cni-89321f11-2b59-31e6-8b46-38d3d1fecb79" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.700 [INFO][5124] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.700 [INFO][5124] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.741 [INFO][5131] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.741 [INFO][5131] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.741 [INFO][5131] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.750 [WARNING][5131] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.750 [INFO][5131] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.752 [INFO][5131] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:02.757373 containerd[1996]: 2026-04-13 20:19:02.754 [INFO][5124] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:02.758483 containerd[1996]: time="2026-04-13T20:19:02.757533101Z" level=info msg="TearDown network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" successfully" Apr 13 20:19:02.758483 containerd[1996]: time="2026-04-13T20:19:02.757580428Z" level=info msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" returns successfully" Apr 13 20:19:02.762428 systemd[1]: run-netns-cni\x2d89321f11\x2d2b59\x2d31e6\x2d8b46\x2d38d3d1fecb79.mount: Deactivated successfully. 
Apr 13 20:19:02.764337 containerd[1996]: time="2026-04-13T20:19:02.763789934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngwcv,Uid:b3a8882f-9c17-49bf-8330-442e2e29fe2d,Namespace:calico-system,Attempt:1,}" Apr 13 20:19:02.918366 systemd-networkd[1908]: calic87c0994628: Link UP Apr 13 20:19:02.919883 systemd-networkd[1908]: calic87c0994628: Gained carrier Apr 13 20:19:02.925994 (udev-worker)[5157]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.829 [INFO][5138] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0 goldmane-cccfbd5cf- calico-system b3a8882f-9c17-49bf-8330-442e2e29fe2d 996 0 2026-04-13 20:18:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-102 goldmane-cccfbd5cf-ngwcv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic87c0994628 [] [] }} ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.829 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.861 [INFO][5149] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" 
HandleID="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.872 [INFO][5149] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" HandleID="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"goldmane-cccfbd5cf-ngwcv", "timestamp":"2026-04-13 20:19:02.861642334 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b71e0)} Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.872 [INFO][5149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.872 [INFO][5149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.872 [INFO][5149] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.876 [INFO][5149] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.884 [INFO][5149] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.889 [INFO][5149] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.891 [INFO][5149] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.894 [INFO][5149] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.894 [INFO][5149] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.896 [INFO][5149] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.903 [INFO][5149] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.911 [INFO][5149] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.66/26] block=192.168.18.64/26 
handle="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.911 [INFO][5149] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.66/26] handle="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" host="ip-172-31-17-102" Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.911 [INFO][5149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:02.967593 containerd[1996]: 2026-04-13 20:19:02.911 [INFO][5149] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" HandleID="k8s-pod-network.8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.914 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b3a8882f-9c17-49bf-8330-442e2e29fe2d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"goldmane-cccfbd5cf-ngwcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic87c0994628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.914 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.66/32] ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.915 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic87c0994628 ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.920 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.932 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b3a8882f-9c17-49bf-8330-442e2e29fe2d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da", Pod:"goldmane-cccfbd5cf-ngwcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic87c0994628", MAC:"b2:33:0c:0a:1b:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:02.969433 containerd[1996]: 2026-04-13 20:19:02.961 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngwcv" WorkloadEndpoint="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:03.003608 
containerd[1996]: time="2026-04-13T20:19:03.003435848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:03.003756 containerd[1996]: time="2026-04-13T20:19:03.003677084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:03.003937 containerd[1996]: time="2026-04-13T20:19:03.003722133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:03.004067 containerd[1996]: time="2026-04-13T20:19:03.003974920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:03.040307 systemd[1]: Started cri-containerd-8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da.scope - libcontainer container 8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da. 
Apr 13 20:19:03.111266 containerd[1996]: time="2026-04-13T20:19:03.111193847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngwcv,Uid:b3a8882f-9c17-49bf-8330-442e2e29fe2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da\"" Apr 13 20:19:03.113786 containerd[1996]: time="2026-04-13T20:19:03.113656273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:19:03.667488 containerd[1996]: time="2026-04-13T20:19:03.667438787Z" level=info msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\"" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.744 [INFO][5226] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.744 [INFO][5226] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" iface="eth0" netns="/var/run/netns/cni-cb3f2d37-4d57-6a3f-af09-125a47d2bb2e" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.745 [INFO][5226] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" iface="eth0" netns="/var/run/netns/cni-cb3f2d37-4d57-6a3f-af09-125a47d2bb2e" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.745 [INFO][5226] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" iface="eth0" netns="/var/run/netns/cni-cb3f2d37-4d57-6a3f-af09-125a47d2bb2e" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.745 [INFO][5226] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.745 [INFO][5226] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.794 [INFO][5233] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.794 [INFO][5233] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.794 [INFO][5233] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.801 [WARNING][5233] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.801 [INFO][5233] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.803 [INFO][5233] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:03.807116 containerd[1996]: 2026-04-13 20:19:03.804 [INFO][5226] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:03.809172 containerd[1996]: time="2026-04-13T20:19:03.807283146Z" level=info msg="TearDown network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" successfully" Apr 13 20:19:03.809172 containerd[1996]: time="2026-04-13T20:19:03.807315835Z" level=info msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" returns successfully" Apr 13 20:19:03.812791 containerd[1996]: time="2026-04-13T20:19:03.812749085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-5djhb,Uid:5a72eb83-1ce6-44af-8741-4e35c3bb2264,Namespace:calico-system,Attempt:1,}" Apr 13 20:19:03.813968 systemd[1]: run-netns-cni\x2dcb3f2d37\x2d4d57\x2d6a3f\x2daf09\x2d125a47d2bb2e.mount: Deactivated successfully. 
Apr 13 20:19:03.962475 systemd-networkd[1908]: cali71ca19e027c: Link UP Apr 13 20:19:03.962725 systemd-networkd[1908]: cali71ca19e027c: Gained carrier Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.874 [INFO][5239] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0 calico-apiserver-89cb875f9- calico-system 5a72eb83-1ce6-44af-8741-4e35c3bb2264 1008 0 2026-04-13 20:18:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89cb875f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-102 calico-apiserver-89cb875f9-5djhb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali71ca19e027c [] [] }} ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.874 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.906 [INFO][5252] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" HandleID="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.914 [INFO][5252] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" HandleID="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"calico-apiserver-89cb875f9-5djhb", "timestamp":"2026-04-13 20:19:03.90688412 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002eb080)} Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.914 [INFO][5252] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.914 [INFO][5252] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.914 [INFO][5252] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.917 [INFO][5252] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.928 [INFO][5252] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.933 [INFO][5252] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.936 [INFO][5252] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.939 [INFO][5252] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.939 [INFO][5252] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.941 [INFO][5252] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.949 [INFO][5252] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.956 [INFO][5252] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.67/26] block=192.168.18.64/26 
handle="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.956 [INFO][5252] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" host="ip-172-31-17-102" Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.956 [INFO][5252] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:03.989478 containerd[1996]: 2026-04-13 20:19:03.956 [INFO][5252] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" HandleID="k8s-pod-network.91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.958 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"5a72eb83-1ce6-44af-8741-4e35c3bb2264", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"calico-apiserver-89cb875f9-5djhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali71ca19e027c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.959 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.67/32] ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.959 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71ca19e027c ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.962 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.964 [INFO][5239] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"5a72eb83-1ce6-44af-8741-4e35c3bb2264", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd", Pod:"calico-apiserver-89cb875f9-5djhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali71ca19e027c", MAC:"82:e9:88:9b:e3:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:03.991835 containerd[1996]: 2026-04-13 20:19:03.984 [INFO][5239] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-5djhb" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:04.038288 containerd[1996]: time="2026-04-13T20:19:04.035977306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:04.038441 containerd[1996]: time="2026-04-13T20:19:04.038330470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:04.038441 containerd[1996]: time="2026-04-13T20:19:04.038414383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:04.038729 containerd[1996]: time="2026-04-13T20:19:04.038660031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:04.083881 systemd[1]: Started cri-containerd-91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd.scope - libcontainer container 91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd. 
Apr 13 20:19:04.150255 containerd[1996]: time="2026-04-13T20:19:04.150105946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-5djhb,Uid:5a72eb83-1ce6-44af-8741-4e35c3bb2264,Namespace:calico-system,Attempt:1,} returns sandbox id \"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd\"" Apr 13 20:19:04.628307 containerd[1996]: time="2026-04-13T20:19:04.627432242Z" level=info msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\"" Apr 13 20:19:04.659437 systemd-networkd[1908]: calic87c0994628: Gained IPv6LL Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.806 [INFO][5325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.807 [INFO][5325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" iface="eth0" netns="/var/run/netns/cni-31698c70-d311-e68f-b7b2-e9e3320023fe" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.807 [INFO][5325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" iface="eth0" netns="/var/run/netns/cni-31698c70-d311-e68f-b7b2-e9e3320023fe" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.807 [INFO][5325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" iface="eth0" netns="/var/run/netns/cni-31698c70-d311-e68f-b7b2-e9e3320023fe" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.807 [INFO][5325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.808 [INFO][5325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.893 [INFO][5334] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.901 [INFO][5334] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.901 [INFO][5334] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.909 [WARNING][5334] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.909 [INFO][5334] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.911 [INFO][5334] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:04.935443 containerd[1996]: 2026-04-13 20:19:04.922 [INFO][5325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:04.939872 containerd[1996]: time="2026-04-13T20:19:04.937820610Z" level=info msg="TearDown network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" successfully" Apr 13 20:19:04.939872 containerd[1996]: time="2026-04-13T20:19:04.937856438Z" level=info msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" returns successfully" Apr 13 20:19:04.948576 systemd[1]: run-netns-cni\x2d31698c70\x2dd311\x2de68f\x2db7b2\x2de9e3320023fe.mount: Deactivated successfully. 
Apr 13 20:19:04.949963 containerd[1996]: time="2026-04-13T20:19:04.949419077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557fbc7964-q7nls,Uid:d935e543-6716-4190-a20a-b6043d73a3aa,Namespace:calico-system,Attempt:1,}" Apr 13 20:19:05.043030 systemd-networkd[1908]: cali71ca19e027c: Gained IPv6LL Apr 13 20:19:05.264523 systemd-networkd[1908]: calic554bd944ec: Link UP Apr 13 20:19:05.267426 systemd-networkd[1908]: calic554bd944ec: Gained carrier Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.109 [INFO][5344] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0 calico-kube-controllers-557fbc7964- calico-system d935e543-6716-4190-a20a-b6043d73a3aa 1015 0 2026-04-13 20:18:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:557fbc7964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-102 calico-kube-controllers-557fbc7964-q7nls eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic554bd944ec [] [] }} ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.109 [INFO][5344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 
20:19:05.172 [INFO][5361] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" HandleID="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.197 [INFO][5361] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" HandleID="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277860), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"calico-kube-controllers-557fbc7964-q7nls", "timestamp":"2026-04-13 20:19:05.172790865 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ee000)} Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.197 [INFO][5361] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.197 [INFO][5361] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.197 [INFO][5361] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.201 [INFO][5361] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.211 [INFO][5361] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.221 [INFO][5361] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.224 [INFO][5361] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.228 [INFO][5361] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.228 [INFO][5361] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.231 [INFO][5361] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9 Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.238 [INFO][5361] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.253 [INFO][5361] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 
handle="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.254 [INFO][5361] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" host="ip-172-31-17-102" Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.254 [INFO][5361] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:05.298921 containerd[1996]: 2026-04-13 20:19:05.254 [INFO][5361] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" HandleID="k8s-pod-network.f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.258 [INFO][5344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0", GenerateName:"calico-kube-controllers-557fbc7964-", Namespace:"calico-system", SelfLink:"", UID:"d935e543-6716-4190-a20a-b6043d73a3aa", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557fbc7964", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"calico-kube-controllers-557fbc7964-q7nls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic554bd944ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.258 [INFO][5344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.68/32] ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.258 [INFO][5344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic554bd944ec ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.268 [INFO][5344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" 
WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.269 [INFO][5344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0", GenerateName:"calico-kube-controllers-557fbc7964-", Namespace:"calico-system", SelfLink:"", UID:"d935e543-6716-4190-a20a-b6043d73a3aa", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557fbc7964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9", Pod:"calico-kube-controllers-557fbc7964-q7nls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic554bd944ec", 
MAC:"ae:9c:e1:5c:8d:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:05.299909 containerd[1996]: 2026-04-13 20:19:05.294 [INFO][5344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9" Namespace="calico-system" Pod="calico-kube-controllers-557fbc7964-q7nls" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:05.356995 containerd[1996]: time="2026-04-13T20:19:05.355887293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:05.356995 containerd[1996]: time="2026-04-13T20:19:05.355975985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:05.356995 containerd[1996]: time="2026-04-13T20:19:05.355999535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:05.356995 containerd[1996]: time="2026-04-13T20:19:05.356154662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:05.412484 systemd[1]: Started cri-containerd-f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9.scope - libcontainer container f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9. 
Apr 13 20:19:05.527365 containerd[1996]: time="2026-04-13T20:19:05.527266824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557fbc7964-q7nls,Uid:d935e543-6716-4190-a20a-b6043d73a3aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9\"" Apr 13 20:19:05.659427 containerd[1996]: time="2026-04-13T20:19:05.658593388Z" level=info msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\"" Apr 13 20:19:05.661549 containerd[1996]: time="2026-04-13T20:19:05.658249621Z" level=info msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\"" Apr 13 20:19:05.680713 containerd[1996]: time="2026-04-13T20:19:05.680601249Z" level=info msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\"" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.836 [WARNING][5446] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.836 [INFO][5446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.836 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" iface="eth0" netns="" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.836 [INFO][5446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.836 [INFO][5446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.960 [INFO][5475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.964 [INFO][5475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.964 [INFO][5475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.985 [WARNING][5475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.985 [INFO][5475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.988 [INFO][5475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:05.999094 containerd[1996]: 2026-04-13 20:19:05.993 [INFO][5446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:06.003766 containerd[1996]: time="2026-04-13T20:19:06.003613565Z" level=info msg="TearDown network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" successfully" Apr 13 20:19:06.003766 containerd[1996]: time="2026-04-13T20:19:06.003656941Z" level=info msg="StopPodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" returns successfully" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.819 [INFO][5453] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.819 [INFO][5453] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" iface="eth0" netns="/var/run/netns/cni-8366b576-67d0-30b8-b8f8-414db53c09ad" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.820 [INFO][5453] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" iface="eth0" netns="/var/run/netns/cni-8366b576-67d0-30b8-b8f8-414db53c09ad" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.822 [INFO][5453] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" iface="eth0" netns="/var/run/netns/cni-8366b576-67d0-30b8-b8f8-414db53c09ad" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.822 [INFO][5453] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.822 [INFO][5453] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.992 [INFO][5469] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.992 [INFO][5469] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:05.993 [INFO][5469] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:06.011 [WARNING][5469] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:06.011 [INFO][5469] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:06.014 [INFO][5469] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:06.030240 containerd[1996]: 2026-04-13 20:19:06.021 [INFO][5453] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Apr 13 20:19:06.031786 containerd[1996]: time="2026-04-13T20:19:06.030990634Z" level=info msg="TearDown network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" successfully" Apr 13 20:19:06.031786 containerd[1996]: time="2026-04-13T20:19:06.031026052Z" level=info msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" returns successfully" Apr 13 20:19:06.039720 containerd[1996]: time="2026-04-13T20:19:06.039492337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-6xnb6,Uid:693a1ffc-7980-4490-bc2b-aca384d54013,Namespace:calico-system,Attempt:1,}" Apr 13 20:19:06.039902 systemd[1]: run-netns-cni\x2d8366b576\x2d67d0\x2d30b8\x2db8f8\x2d414db53c09ad.mount: Deactivated successfully. 
Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.908 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.909 [INFO][5457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" iface="eth0" netns="/var/run/netns/cni-fb68b2ca-a768-f35b-0d96-78d76fb41ca5" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.909 [INFO][5457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" iface="eth0" netns="/var/run/netns/cni-fb68b2ca-a768-f35b-0d96-78d76fb41ca5" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.911 [INFO][5457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" iface="eth0" netns="/var/run/netns/cni-fb68b2ca-a768-f35b-0d96-78d76fb41ca5" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.911 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:05.911 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.017 [INFO][5481] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.017 [INFO][5481] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.017 [INFO][5481] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.035 [WARNING][5481] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.037 [INFO][5481] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.041 [INFO][5481] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:06.058576 containerd[1996]: 2026-04-13 20:19:06.044 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:19:06.060299 containerd[1996]: time="2026-04-13T20:19:06.059604241Z" level=info msg="TearDown network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" successfully" Apr 13 20:19:06.060299 containerd[1996]: time="2026-04-13T20:19:06.059639673Z" level=info msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" returns successfully" Apr 13 20:19:06.068548 systemd[1]: run-netns-cni\x2dfb68b2ca\x2da768\x2df35b\x2d0d96\x2d78d76fb41ca5.mount: Deactivated successfully. 
Apr 13 20:19:06.073361 containerd[1996]: time="2026-04-13T20:19:06.072896835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ktw4w,Uid:3e807f6a-3dfa-4c8a-9873-de1da77007ef,Namespace:kube-system,Attempt:1,}" Apr 13 20:19:06.075363 containerd[1996]: time="2026-04-13T20:19:06.075330100Z" level=info msg="RemovePodSandbox for \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\"" Apr 13 20:19:06.075510 containerd[1996]: time="2026-04-13T20:19:06.075496627Z" level=info msg="Forcibly stopping sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\"" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.216 [WARNING][5500] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" WorkloadEndpoint="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.216 [INFO][5500] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.216 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" iface="eth0" netns="" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.216 [INFO][5500] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.216 [INFO][5500] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.342 [INFO][5529] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.344 [INFO][5529] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.344 [INFO][5529] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.362 [WARNING][5529] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.362 [INFO][5529] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" HandleID="k8s-pod-network.162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Workload="ip--172--31--17--102-k8s-whisker--5874db6db9--9vplk-eth0" Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.364 [INFO][5529] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:06.389612 containerd[1996]: 2026-04-13 20:19:06.372 [INFO][5500] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f" Apr 13 20:19:06.391031 containerd[1996]: time="2026-04-13T20:19:06.390953784Z" level=info msg="TearDown network for sandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" successfully" Apr 13 20:19:06.453436 containerd[1996]: time="2026-04-13T20:19:06.453368978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:19:06.453849 containerd[1996]: time="2026-04-13T20:19:06.453485691Z" level=info msg="RemovePodSandbox \"162784498fbee36836bc0e23985bec793452b09635952c5c15ea0ce60cfc8c0f\" returns successfully" Apr 13 20:19:06.492599 containerd[1996]: time="2026-04-13T20:19:06.491882369Z" level=info msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\"" Apr 13 20:19:06.494531 systemd-networkd[1908]: cali56254d15e8f: Link UP Apr 13 20:19:06.498708 systemd-networkd[1908]: cali56254d15e8f: Gained carrier Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.238 [INFO][5501] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0 calico-apiserver-89cb875f9- calico-system 693a1ffc-7980-4490-bc2b-aca384d54013 1024 0 2026-04-13 20:18:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:89cb875f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-102 calico-apiserver-89cb875f9-6xnb6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali56254d15e8f [] [] }} ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.239 [INFO][5501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.354 [INFO][5535] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" HandleID="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.387 [INFO][5535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" HandleID="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353470), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"calico-apiserver-89cb875f9-6xnb6", "timestamp":"2026-04-13 20:19:06.354779226 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.387 [INFO][5535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.387 [INFO][5535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.387 [INFO][5535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.392 [INFO][5535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.403 [INFO][5535] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.419 [INFO][5535] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.424 [INFO][5535] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.429 [INFO][5535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.429 [INFO][5535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.432 [INFO][5535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.449 [INFO][5535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.463 [INFO][5535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.69/26] block=192.168.18.64/26 
handle="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.464 [INFO][5535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.69/26] handle="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" host="ip-172-31-17-102" Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.464 [INFO][5535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:06.565454 containerd[1996]: 2026-04-13 20:19:06.464 [INFO][5535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.69/26] IPv6=[] ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" HandleID="k8s-pod-network.3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.474 [INFO][5501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"693a1ffc-7980-4490-bc2b-aca384d54013", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"calico-apiserver-89cb875f9-6xnb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56254d15e8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.475 [INFO][5501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.69/32] ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.475 [INFO][5501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56254d15e8f ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.502 [INFO][5501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.510 [INFO][5501] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"693a1ffc-7980-4490-bc2b-aca384d54013", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f", Pod:"calico-apiserver-89cb875f9-6xnb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56254d15e8f", MAC:"4e:98:e9:86:cb:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:06.566454 containerd[1996]: 2026-04-13 20:19:06.551 [INFO][5501] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f" Namespace="calico-system" Pod="calico-apiserver-89cb875f9-6xnb6" WorkloadEndpoint="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0" Apr 13 20:19:06.620316 systemd-networkd[1908]: caliacef0f69325: Link UP Apr 13 20:19:06.623162 systemd-networkd[1908]: caliacef0f69325: Gained carrier Apr 13 20:19:06.627040 containerd[1996]: time="2026-04-13T20:19:06.627000494Z" level=info msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\"" Apr 13 20:19:06.633229 containerd[1996]: time="2026-04-13T20:19:06.633141478Z" level=info msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\"" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.252 [INFO][5513] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0 coredns-66bc5c9577- kube-system 3e807f6a-3dfa-4c8a-9873-de1da77007ef 1025 0 2026-04-13 20:18:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-102 coredns-66bc5c9577-ktw4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliacef0f69325 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.252 [INFO][5513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" 
Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.378 [INFO][5537] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" HandleID="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.402 [INFO][5537] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" HandleID="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003819c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-102", "pod":"coredns-66bc5c9577-ktw4w", "timestamp":"2026-04-13 20:19:06.37883634 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d51e0)} Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.402 [INFO][5537] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.465 [INFO][5537] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.465 [INFO][5537] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102' Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.494 [INFO][5537] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.509 [INFO][5537] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.528 [INFO][5537] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.542 [INFO][5537] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.559 [INFO][5537] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.559 [INFO][5537] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.568 [INFO][5537] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2 Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.579 [INFO][5537] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.598 [INFO][5537] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.70/26] block=192.168.18.64/26 
handle="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.598 [INFO][5537] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.70/26] handle="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" host="ip-172-31-17-102" Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.598 [INFO][5537] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:06.679321 containerd[1996]: 2026-04-13 20:19:06.598 [INFO][5537] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.70/26] IPv6=[] ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" HandleID="k8s-pod-network.3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.682068 containerd[1996]: 2026-04-13 20:19:06.610 [INFO][5513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3e807f6a-3dfa-4c8a-9873-de1da77007ef", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"coredns-66bc5c9577-ktw4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacef0f69325", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:06.682068 containerd[1996]: 2026-04-13 20:19:06.610 [INFO][5513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.70/32] ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.682068 containerd[1996]: 2026-04-13 20:19:06.610 [INFO][5513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacef0f69325 ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" 
WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.682068 containerd[1996]: 2026-04-13 20:19:06.640 [INFO][5513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.682068 containerd[1996]: 2026-04-13 20:19:06.643 [INFO][5513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3e807f6a-3dfa-4c8a-9873-de1da77007ef", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2", Pod:"coredns-66bc5c9577-ktw4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacef0f69325", MAC:"fe:a3:b5:b4:fa:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:06.683473 containerd[1996]: 2026-04-13 20:19:06.670 [INFO][5513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2" Namespace="kube-system" Pod="coredns-66bc5c9577-ktw4w" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:19:06.752481 containerd[1996]: time="2026-04-13T20:19:06.738752597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:06.752481 containerd[1996]: time="2026-04-13T20:19:06.752122001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:06.752481 containerd[1996]: time="2026-04-13T20:19:06.752147590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:06.752481 containerd[1996]: time="2026-04-13T20:19:06.752302105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:06.801822 containerd[1996]: time="2026-04-13T20:19:06.800901534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:06.801822 containerd[1996]: time="2026-04-13T20:19:06.800992045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:06.801822 containerd[1996]: time="2026-04-13T20:19:06.801021428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:06.801822 containerd[1996]: time="2026-04-13T20:19:06.801151012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:06.850438 systemd[1]: Started cri-containerd-3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f.scope - libcontainer container 3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f. Apr 13 20:19:06.918476 systemd[1]: Started cri-containerd-3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2.scope - libcontainer container 3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2. Apr 13 20:19:06.964158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606749888.mount: Deactivated successfully. Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.692 [WARNING][5570] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0", GenerateName:"calico-kube-controllers-557fbc7964-", Namespace:"calico-system", SelfLink:"", UID:"d935e543-6716-4190-a20a-b6043d73a3aa", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557fbc7964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9", Pod:"calico-kube-controllers-557fbc7964-q7nls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic554bd944ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.692 [INFO][5570] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.692 [INFO][5570] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" iface="eth0" netns="" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.692 [INFO][5570] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.692 [INFO][5570] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.913 [INFO][5626] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.914 [INFO][5626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.914 [INFO][5626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.968 [WARNING][5626] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.968 [INFO][5626] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0" Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.981 [INFO][5626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:07.021895 containerd[1996]: 2026-04-13 20:19:06.996 [INFO][5570] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Apr 13 20:19:07.024685 containerd[1996]: time="2026-04-13T20:19:07.022312716Z" level=info msg="TearDown network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" successfully" Apr 13 20:19:07.024685 containerd[1996]: time="2026-04-13T20:19:07.022353777Z" level=info msg="StopPodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" returns successfully" Apr 13 20:19:07.024685 containerd[1996]: time="2026-04-13T20:19:07.024319481Z" level=info msg="RemovePodSandbox for \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\"" Apr 13 20:19:07.024685 containerd[1996]: time="2026-04-13T20:19:07.024356493Z" level=info msg="Forcibly stopping sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\"" Apr 13 20:19:07.026466 systemd-networkd[1908]: calic554bd944ec: Gained IPv6LL Apr 13 20:19:07.127186 containerd[1996]: time="2026-04-13T20:19:07.127045365Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-ktw4w,Uid:3e807f6a-3dfa-4c8a-9873-de1da77007ef,Namespace:kube-system,Attempt:1,} returns sandbox id \"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2\"" Apr 13 20:19:07.218799 containerd[1996]: time="2026-04-13T20:19:07.218204860Z" level=info msg="CreateContainer within sandbox \"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:19:07.275916 containerd[1996]: time="2026-04-13T20:19:07.275510324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-89cb875f9-6xnb6,Uid:693a1ffc-7980-4490-bc2b-aca384d54013,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f\"" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.004 [INFO][5602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.004 [INFO][5602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" iface="eth0" netns="/var/run/netns/cni-8df6da81-e09e-8c2f-14df-4b186afa94bf" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.005 [INFO][5602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" iface="eth0" netns="/var/run/netns/cni-8df6da81-e09e-8c2f-14df-4b186afa94bf" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.005 [INFO][5602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" iface="eth0" netns="/var/run/netns/cni-8df6da81-e09e-8c2f-14df-4b186afa94bf" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.005 [INFO][5602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.006 [INFO][5602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.225 [INFO][5714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.225 [INFO][5714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.225 [INFO][5714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.270 [WARNING][5714] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.270 [INFO][5714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.283 [INFO][5714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:07.325519 containerd[1996]: 2026-04-13 20:19:07.296 [INFO][5602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:19:07.331252 containerd[1996]: time="2026-04-13T20:19:07.326203312Z" level=info msg="TearDown network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" successfully" Apr 13 20:19:07.331252 containerd[1996]: time="2026-04-13T20:19:07.329361019Z" level=info msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" returns successfully" Apr 13 20:19:07.334687 systemd[1]: run-netns-cni\x2d8df6da81\x2de09e\x2d8c2f\x2d14df\x2d4b186afa94bf.mount: Deactivated successfully. Apr 13 20:19:07.340569 containerd[1996]: time="2026-04-13T20:19:07.339654923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9jf8f,Uid:b00fbc7c-661e-42f4-86eb-d3bcca719bc6,Namespace:kube-system,Attempt:1,}" Apr 13 20:19:07.361432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425869118.mount: Deactivated successfully. 
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.001 [INFO][5629] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.003 [INFO][5629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" iface="eth0" netns="/var/run/netns/cni-3f03312c-0ef9-c3ee-60cb-5c25ac907f8e"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.004 [INFO][5629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" iface="eth0" netns="/var/run/netns/cni-3f03312c-0ef9-c3ee-60cb-5c25ac907f8e"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.005 [INFO][5629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" iface="eth0" netns="/var/run/netns/cni-3f03312c-0ef9-c3ee-60cb-5c25ac907f8e"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.006 [INFO][5629] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.006 [INFO][5629] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.348 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.351 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.351 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.390 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.390 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.396 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:07.419326 containerd[1996]: 2026-04-13 20:19:07.406 [INFO][5629] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327"
Apr 13 20:19:07.420593 containerd[1996]: time="2026-04-13T20:19:07.419984572Z" level=info msg="TearDown network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" successfully"
Apr 13 20:19:07.420593 containerd[1996]: time="2026-04-13T20:19:07.420019950Z" level=info msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" returns successfully"
Apr 13 20:19:07.428395 containerd[1996]: time="2026-04-13T20:19:07.428338412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdjzz,Uid:f5c81d3c-90a0-440b-96eb-db49837fb4b5,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:07.505978 containerd[1996]: time="2026-04-13T20:19:07.504678073Z" level=info msg="CreateContainer within sandbox \"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fee71939a2d3381eecf573a8e5b7829c956a9b81ebabc988daded0472249010d\""
Apr 13 20:19:07.508409 containerd[1996]: time="2026-04-13T20:19:07.508255061Z" level=info msg="StartContainer for \"fee71939a2d3381eecf573a8e5b7829c956a9b81ebabc988daded0472249010d\""
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.403 [WARNING][5731] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0", GenerateName:"calico-kube-controllers-557fbc7964-", Namespace:"calico-system", SelfLink:"", UID:"d935e543-6716-4190-a20a-b6043d73a3aa", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557fbc7964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9", Pod:"calico-kube-controllers-557fbc7964-q7nls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic554bd944ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.403 [INFO][5731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.403 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" iface="eth0" netns=""
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.403 [INFO][5731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.403 [INFO][5731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.475 [INFO][5764] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.475 [INFO][5764] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.475 [INFO][5764] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.497 [WARNING][5764] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.497 [INFO][5764] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" HandleID="k8s-pod-network.388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5" Workload="ip--172--31--17--102-k8s-calico--kube--controllers--557fbc7964--q7nls-eth0"
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.512 [INFO][5764] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:07.546389 containerd[1996]: 2026-04-13 20:19:07.527 [INFO][5731] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5"
Apr 13 20:19:07.546389 containerd[1996]: time="2026-04-13T20:19:07.546325704Z" level=info msg="TearDown network for sandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" successfully"
Apr 13 20:19:07.675531 containerd[1996]: time="2026-04-13T20:19:07.675480728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:19:07.675904 containerd[1996]: time="2026-04-13T20:19:07.675782424Z" level=info msg="RemovePodSandbox \"388c645b69a6ac099260b82ea6ed05ad23887519fbc45ce7c3c670c97dda8cd5\" returns successfully"
Apr 13 20:19:07.676921 containerd[1996]: time="2026-04-13T20:19:07.676334883Z" level=info msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\""
Apr 13 20:19:07.853244 systemd[1]: Started cri-containerd-fee71939a2d3381eecf573a8e5b7829c956a9b81ebabc988daded0472249010d.scope - libcontainer container fee71939a2d3381eecf573a8e5b7829c956a9b81ebabc988daded0472249010d.
Apr 13 20:19:07.934673 systemd[1]: run-netns-cni\x2d3f03312c\x2d0ef9\x2dc3ee\x2d60cb\x2d5c25ac907f8e.mount: Deactivated successfully.
Apr 13 20:19:07.992613 containerd[1996]: time="2026-04-13T20:19:07.991621859Z" level=info msg="StartContainer for \"fee71939a2d3381eecf573a8e5b7829c956a9b81ebabc988daded0472249010d\" returns successfully"
Apr 13 20:19:08.009633 systemd[1]: Started sshd@7-172.31.17.102:22-50.85.169.122:39672.service - OpenSSH per-connection server daemon (50.85.169.122:39672).
Apr 13 20:19:08.051433 systemd-networkd[1908]: caliacef0f69325: Gained IPv6LL
Apr 13 20:19:08.169077 systemd-networkd[1908]: cali39b5bd93dac: Link UP
Apr 13 20:19:08.172475 systemd-networkd[1908]: cali39b5bd93dac: Gained carrier
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.740 [INFO][5774] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0 csi-node-driver- calico-system f5c81d3c-90a0-440b-96eb-db49837fb4b5 1036 0 2026-04-13 20:18:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-102 csi-node-driver-cdjzz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali39b5bd93dac [] [] }} ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.740 [INFO][5774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.931 [INFO][5823] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" HandleID="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.969 [INFO][5823] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" HandleID="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-102", "pod":"csi-node-driver-cdjzz", "timestamp":"2026-04-13 20:19:07.931442944 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f8420)}
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.970 [INFO][5823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.970 [INFO][5823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.970 [INFO][5823] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102'
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:07.980 [INFO][5823] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.013 [INFO][5823] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.033 [INFO][5823] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.038 [INFO][5823] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.044 [INFO][5823] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.045 [INFO][5823] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.057 [INFO][5823] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.104 [INFO][5823] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.129 [INFO][5823] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.71/26] block=192.168.18.64/26 handle="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.132 [INFO][5823] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.71/26] handle="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" host="ip-172-31-17-102"
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.134 [INFO][5823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:08.241256 containerd[1996]: 2026-04-13 20:19:08.135 [INFO][5823] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.71/26] IPv6=[] ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" HandleID="k8s-pod-network.57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.146 [INFO][5774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5c81d3c-90a0-440b-96eb-db49837fb4b5", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"csi-node-driver-cdjzz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39b5bd93dac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.147 [INFO][5774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.71/32] ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.147 [INFO][5774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39b5bd93dac ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.187 [INFO][5774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.188 [INFO][5774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5c81d3c-90a0-440b-96eb-db49837fb4b5", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94", Pod:"csi-node-driver-cdjzz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39b5bd93dac", MAC:"0a:ec:4f:3f:41:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:19:08.242645 containerd[1996]: 2026-04-13 20:19:08.228 [INFO][5774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94" Namespace="calico-system" Pod="csi-node-driver-cdjzz" WorkloadEndpoint="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0"
Apr 13 20:19:08.370850 systemd-networkd[1908]: cali56254d15e8f: Gained IPv6LL
Apr 13 20:19:08.382780 containerd[1996]: time="2026-04-13T20:19:08.382090075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:19:08.382780 containerd[1996]: time="2026-04-13T20:19:08.382156857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:19:08.382780 containerd[1996]: time="2026-04-13T20:19:08.382193475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:08.382780 containerd[1996]: time="2026-04-13T20:19:08.382402364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:08.443458 systemd[1]: Started cri-containerd-57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94.scope - libcontainer container 57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94.
Apr 13 20:19:08.484404 systemd-networkd[1908]: cali5ba3db61cbf: Link UP
Apr 13 20:19:08.484691 systemd-networkd[1908]: cali5ba3db61cbf: Gained carrier
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:07.893 [WARNING][5798] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b3a8882f-9c17-49bf-8330-442e2e29fe2d", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da", Pod:"goldmane-cccfbd5cf-ngwcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic87c0994628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:07.894 [INFO][5798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:07.894 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" iface="eth0" netns=""
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:07.894 [INFO][5798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:07.894 [INFO][5798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.218 [INFO][5851] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.218 [INFO][5851] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.429 [INFO][5851] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.472 [WARNING][5851] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.472 [INFO][5851] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0"
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.479 [INFO][5851] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:08.507935 containerd[1996]: 2026-04-13 20:19:08.490 [INFO][5798] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15"
Apr 13 20:19:08.512758 containerd[1996]: time="2026-04-13T20:19:08.511432043Z" level=info msg="TearDown network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" successfully"
Apr 13 20:19:08.512758 containerd[1996]: time="2026-04-13T20:19:08.511475083Z" level=info msg="StopPodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" returns successfully"
Apr 13 20:19:08.520996 containerd[1996]: time="2026-04-13T20:19:08.519765726Z" level=info msg="RemovePodSandbox for \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\""
Apr 13 20:19:08.520996 containerd[1996]: time="2026-04-13T20:19:08.519811525Z" level=info msg="Forcibly stopping sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\""
Apr 13 20:19:08.525369 kubelet[3387]: I0413 20:19:08.524363 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ktw4w" podStartSLOduration=58.524342815 podStartE2EDuration="58.524342815s" podCreationTimestamp="2026-04-13 20:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:08.344950285 +0000 UTC m=+62.914358006" watchObservedRunningTime="2026-04-13 20:19:08.524342815 +0000 UTC m=+63.093750532"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:07.864 [INFO][5799] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0 coredns-66bc5c9577- kube-system b00fbc7c-661e-42f4-86eb-d3bcca719bc6 1037 0 2026-04-13 20:18:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-102 coredns-66bc5c9577-9jf8f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ba3db61cbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:07.870 [INFO][5799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.129 [INFO][5850] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" HandleID="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.143 [INFO][5850] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" HandleID="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102700), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-102", "pod":"coredns-66bc5c9577-9jf8f", "timestamp":"2026-04-13 20:19:08.129127273 +0000 UTC"}, Hostname:"ip-172-31-17-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002054a0)}
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.143 [INFO][5850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.143 [INFO][5850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.143 [INFO][5850] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-102'
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.150 [INFO][5850] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.176 [INFO][5850] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.214 [INFO][5850] ipam/ipam.go 526: Trying affinity for 192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.235 [INFO][5850] ipam/ipam.go 160: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.261 [INFO][5850] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.265 [INFO][5850] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.284 [INFO][5850] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.326 [INFO][5850] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.414 [INFO][5850] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.18.72/26] block=192.168.18.64/26 handle="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.419 [INFO][5850] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.18.72/26] handle="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" host="ip-172-31-17-102"
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.424 [INFO][5850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:08.533553 containerd[1996]: 2026-04-13 20:19:08.424 [INFO][5850] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.18.72/26] IPv6=[] ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" HandleID="k8s-pod-network.826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.537060 containerd[1996]: 2026-04-13 20:19:08.455 [INFO][5799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b00fbc7c-661e-42f4-86eb-d3bcca719bc6", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"", Pod:"coredns-66bc5c9577-9jf8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba3db61cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:19:08.537060 containerd[1996]: 2026-04-13 20:19:08.455 [INFO][5799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.72/32] ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.537060 containerd[1996]: 2026-04-13 20:19:08.455 [INFO][5799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ba3db61cbf ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.537060 containerd[1996]: 2026-04-13 20:19:08.481 [INFO][5799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0"
Apr 13 20:19:08.537060 containerd[1996]: 2026-04-13 20:19:08.488 [INFO][5799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b00fbc7c-661e-42f4-86eb-d3bcca719bc6", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008", Pod:"coredns-66bc5c9577-9jf8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"",
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba3db61cbf", MAC:"16:4e:32:3c:d2:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:08.538663 containerd[1996]: 2026-04-13 20:19:08.523 [INFO][5799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008" Namespace="kube-system" Pod="coredns-66bc5c9577-9jf8f" WorkloadEndpoint="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:19:08.632100 containerd[1996]: time="2026-04-13T20:19:08.631702984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:08.632100 containerd[1996]: time="2026-04-13T20:19:08.631780163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:08.632100 containerd[1996]: time="2026-04-13T20:19:08.631820855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.632100 containerd[1996]: time="2026-04-13T20:19:08.631971059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.706177 containerd[1996]: time="2026-04-13T20:19:08.705488314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdjzz,Uid:f5c81d3c-90a0-440b-96eb-db49837fb4b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94\"" Apr 13 20:19:08.757013 systemd[1]: Started cri-containerd-826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008.scope - libcontainer container 826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008. Apr 13 20:19:08.900781 containerd[1996]: time="2026-04-13T20:19:08.900691615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9jf8f,Uid:b00fbc7c-661e-42f4-86eb-d3bcca719bc6,Namespace:kube-system,Attempt:1,} returns sandbox id \"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008\"" Apr 13 20:19:08.952516 containerd[1996]: time="2026-04-13T20:19:08.952469361Z" level=info msg="CreateContainer within sandbox \"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:19:08.990329 containerd[1996]: time="2026-04-13T20:19:08.989820868Z" level=info msg="CreateContainer within sandbox \"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9dab23835085a357c94648a60e021807bf5d58b60be68c975faec43d9e7c2a15\"" Apr 13 20:19:08.992895 containerd[1996]: time="2026-04-13T20:19:08.992070631Z" level=info msg="StartContainer for \"9dab23835085a357c94648a60e021807bf5d58b60be68c975faec43d9e7c2a15\"" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.759 [WARNING][5959] 
cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"b3a8882f-9c17-49bf-8330-442e2e29fe2d", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da", Pod:"goldmane-cccfbd5cf-ngwcv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic87c0994628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.760 [INFO][5959] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.760 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" iface="eth0" netns="" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.760 [INFO][5959] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.760 [INFO][5959] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.989 [INFO][6003] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.993 [INFO][6003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:08.993 [INFO][6003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:09.010 [WARNING][6003] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:09.010 [INFO][6003] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" HandleID="k8s-pod-network.504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Workload="ip--172--31--17--102-k8s-goldmane--cccfbd5cf--ngwcv-eth0" Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:09.013 [INFO][6003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:09.025909 containerd[1996]: 2026-04-13 20:19:09.017 [INFO][5959] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15" Apr 13 20:19:09.026793 containerd[1996]: time="2026-04-13T20:19:09.025946049Z" level=info msg="TearDown network for sandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" successfully" Apr 13 20:19:09.044246 containerd[1996]: time="2026-04-13T20:19:09.043384773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:19:09.044246 containerd[1996]: time="2026-04-13T20:19:09.043499657Z" level=info msg="RemovePodSandbox \"504573e240d52ad2f57249b199291bab350fd4662e0e59e191150d38f211dd15\" returns successfully" Apr 13 20:19:09.045585 containerd[1996]: time="2026-04-13T20:19:09.044442007Z" level=info msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\"" Apr 13 20:19:09.076370 systemd[1]: Started cri-containerd-9dab23835085a357c94648a60e021807bf5d58b60be68c975faec43d9e7c2a15.scope - libcontainer container 9dab23835085a357c94648a60e021807bf5d58b60be68c975faec43d9e7c2a15. Apr 13 20:19:09.102463 sshd[5868]: Accepted publickey for core from 50.85.169.122 port 39672 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:09.107766 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:09.119045 systemd-logind[1966]: New session 8 of user core. Apr 13 20:19:09.124685 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:19:09.180809 containerd[1996]: time="2026-04-13T20:19:09.180511211Z" level=info msg="StartContainer for \"9dab23835085a357c94648a60e021807bf5d58b60be68c975faec43d9e7c2a15\" returns successfully" Apr 13 20:19:09.368361 kubelet[3387]: I0413 20:19:09.366796 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9jf8f" podStartSLOduration=59.366772183 podStartE2EDuration="59.366772183s" podCreationTimestamp="2026-04-13 20:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:09.294390229 +0000 UTC m=+63.863797974" watchObservedRunningTime="2026-04-13 20:19:09.366772183 +0000 UTC m=+63.936179904" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.232 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"5a72eb83-1ce6-44af-8741-4e35c3bb2264", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd", Pod:"calico-apiserver-89cb875f9-5djhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali71ca19e027c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.234 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.234 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" iface="eth0" netns="" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.234 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.234 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.349 [INFO][6075] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.349 [INFO][6075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.349 [INFO][6075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.381 [WARNING][6075] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.381 [INFO][6075] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.385 [INFO][6075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:09.398840 containerd[1996]: 2026-04-13 20:19:09.390 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.398840 containerd[1996]: time="2026-04-13T20:19:09.398639945Z" level=info msg="TearDown network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" successfully" Apr 13 20:19:09.398840 containerd[1996]: time="2026-04-13T20:19:09.398697803Z" level=info msg="StopPodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" returns successfully" Apr 13 20:19:09.403314 containerd[1996]: time="2026-04-13T20:19:09.402865343Z" level=info msg="RemovePodSandbox for \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\"" Apr 13 20:19:09.403314 containerd[1996]: time="2026-04-13T20:19:09.402928811Z" level=info msg="Forcibly stopping sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\"" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.508 [WARNING][6091] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"5a72eb83-1ce6-44af-8741-4e35c3bb2264", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd", Pod:"calico-apiserver-89cb875f9-5djhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali71ca19e027c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.509 [INFO][6091] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.509 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" iface="eth0" netns="" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.509 [INFO][6091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.510 [INFO][6091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.583 [INFO][6099] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.585 [INFO][6099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.585 [INFO][6099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.599 [WARNING][6099] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.599 [INFO][6099] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" HandleID="k8s-pod-network.0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--5djhb-eth0" Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.602 [INFO][6099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:09.620249 containerd[1996]: 2026-04-13 20:19:09.610 [INFO][6091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996" Apr 13 20:19:09.620249 containerd[1996]: time="2026-04-13T20:19:09.619664061Z" level=info msg="TearDown network for sandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" successfully" Apr 13 20:19:09.647476 containerd[1996]: time="2026-04-13T20:19:09.647369422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:19:09.647721 containerd[1996]: time="2026-04-13T20:19:09.647520577Z" level=info msg="RemovePodSandbox \"0bcc8216e509eb5cac76924ea87ce5cb4cf698e3eaff9680ee1ff91662cd0996\" returns successfully" Apr 13 20:19:09.651738 systemd-networkd[1908]: cali39b5bd93dac: Gained IPv6LL Apr 13 20:19:09.774075 containerd[1996]: time="2026-04-13T20:19:09.773884116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:09.777350 containerd[1996]: time="2026-04-13T20:19:09.777291208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:19:09.781256 containerd[1996]: time="2026-04-13T20:19:09.778088824Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:09.783706 containerd[1996]: time="2026-04-13T20:19:09.783618669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:09.785377 containerd[1996]: time="2026-04-13T20:19:09.785342785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.671637193s" Apr 13 20:19:09.785657 containerd[1996]: time="2026-04-13T20:19:09.785634076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:19:09.791669 containerd[1996]: 
time="2026-04-13T20:19:09.790913509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:19:09.800297 containerd[1996]: time="2026-04-13T20:19:09.800249691Z" level=info msg="CreateContainer within sandbox \"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:19:09.820719 containerd[1996]: time="2026-04-13T20:19:09.820684866Z" level=info msg="CreateContainer within sandbox \"8a9813f0c8b11f3059427215ef9325a1b784b36ecaee7dc1411c3afe2da146da\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c\"" Apr 13 20:19:09.824905 containerd[1996]: time="2026-04-13T20:19:09.823621976Z" level=info msg="StartContainer for \"da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c\"" Apr 13 20:19:09.866331 systemd[1]: Started cri-containerd-da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c.scope - libcontainer container da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c. Apr 13 20:19:09.939674 containerd[1996]: time="2026-04-13T20:19:09.939559225Z" level=info msg="StartContainer for \"da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c\" returns successfully" Apr 13 20:19:09.985530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438660064.mount: Deactivated successfully. 
Apr 13 20:19:10.396457 kubelet[3387]: I0413 20:19:10.393322 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-ngwcv" podStartSLOduration=39.717418594 podStartE2EDuration="46.39329823s" podCreationTimestamp="2026-04-13 20:18:24 +0000 UTC" firstStartedPulling="2026-04-13 20:19:03.113262776 +0000 UTC m=+57.682670477" lastFinishedPulling="2026-04-13 20:19:09.789142415 +0000 UTC m=+64.358550113" observedRunningTime="2026-04-13 20:19:10.361301341 +0000 UTC m=+64.930709088" watchObservedRunningTime="2026-04-13 20:19:10.39329823 +0000 UTC m=+64.962705951" Apr 13 20:19:10.486836 systemd-networkd[1908]: cali5ba3db61cbf: Gained IPv6LL Apr 13 20:19:10.838586 sshd[5868]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:10.848184 systemd[1]: sshd@7-172.31.17.102:22-50.85.169.122:39672.service: Deactivated successfully. Apr 13 20:19:10.852584 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:19:10.853732 systemd-logind[1966]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:19:10.855494 systemd-logind[1966]: Removed session 8. Apr 13 20:19:12.391305 systemd[1]: run-containerd-runc-k8s.io-da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c-runc.28DvuM.mount: Deactivated successfully. 
Apr 13 20:19:12.600806 ntpd[1958]: Listen normally on 11 calic87c0994628 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 11 calic87c0994628 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 12 cali71ca19e027c [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 13 calic554bd944ec [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 14 cali56254d15e8f [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 15 caliacef0f69325 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 16 cali39b5bd93dac [fe80::ecee:eeff:feee:eeee%13]:123 Apr 13 20:19:12.603359 ntpd[1958]: 13 Apr 20:19:12 ntpd[1958]: Listen normally on 17 cali5ba3db61cbf [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:19:12.600888 ntpd[1958]: Listen normally on 12 cali71ca19e027c [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:19:12.600931 ntpd[1958]: Listen normally on 13 calic554bd944ec [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:19:12.600963 ntpd[1958]: Listen normally on 14 cali56254d15e8f [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:19:12.600993 ntpd[1958]: Listen normally on 15 caliacef0f69325 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 13 20:19:12.601030 ntpd[1958]: Listen normally on 16 cali39b5bd93dac [fe80::ecee:eeff:feee:eeee%13]:123 Apr 13 20:19:12.601059 ntpd[1958]: Listen normally on 17 cali5ba3db61cbf [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:19:13.076575 containerd[1996]: time="2026-04-13T20:19:13.076344436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:13.120401 containerd[1996]: 
time="2026-04-13T20:19:13.077613950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:19:13.123237 containerd[1996]: time="2026-04-13T20:19:13.102864168Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:13.123490 containerd[1996]: time="2026-04-13T20:19:13.123460121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:13.129856 containerd[1996]: time="2026-04-13T20:19:13.129800492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.316107095s" Apr 13 20:19:13.131033 containerd[1996]: time="2026-04-13T20:19:13.130930594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:19:13.178161 containerd[1996]: time="2026-04-13T20:19:13.178120474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:19:13.285696 containerd[1996]: time="2026-04-13T20:19:13.285654411Z" level=info msg="CreateContainer within sandbox \"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:19:13.342767 containerd[1996]: time="2026-04-13T20:19:13.342637252Z" level=info msg="CreateContainer within sandbox 
\"91298db86f85f22aff449b1d3fc4b8799f99133c1f5c5bad737cc47cc5a723bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"741a41c2ae3f2f0416735a25e91a32c4164ff3d98589a09b7e110b03bdbc2754\"" Apr 13 20:19:13.352664 containerd[1996]: time="2026-04-13T20:19:13.352321796Z" level=info msg="StartContainer for \"741a41c2ae3f2f0416735a25e91a32c4164ff3d98589a09b7e110b03bdbc2754\"" Apr 13 20:19:13.458734 systemd[1]: Started cri-containerd-741a41c2ae3f2f0416735a25e91a32c4164ff3d98589a09b7e110b03bdbc2754.scope - libcontainer container 741a41c2ae3f2f0416735a25e91a32c4164ff3d98589a09b7e110b03bdbc2754. Apr 13 20:19:13.621918 containerd[1996]: time="2026-04-13T20:19:13.621797671Z" level=info msg="StartContainer for \"741a41c2ae3f2f0416735a25e91a32c4164ff3d98589a09b7e110b03bdbc2754\" returns successfully" Apr 13 20:19:15.476682 kubelet[3387]: I0413 20:19:15.476626 3387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:16.029717 systemd[1]: Started sshd@8-172.31.17.102:22-50.85.169.122:49216.service - OpenSSH per-connection server daemon (50.85.169.122:49216). Apr 13 20:19:17.173432 sshd[6288]: Accepted publickey for core from 50.85.169.122 port 49216 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:17.178461 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:17.188736 systemd-logind[1966]: New session 9 of user core. Apr 13 20:19:17.194506 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 13 20:19:18.269065 containerd[1996]: time="2026-04-13T20:19:18.268995745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:18.276377 containerd[1996]: time="2026-04-13T20:19:18.276287604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:19:18.329648 containerd[1996]: time="2026-04-13T20:19:18.329424483Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:18.373263 containerd[1996]: time="2026-04-13T20:19:18.335281171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:18.373263 containerd[1996]: time="2026-04-13T20:19:18.367948004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.189779868s" Apr 13 20:19:18.373263 containerd[1996]: time="2026-04-13T20:19:18.372913084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:19:18.403564 containerd[1996]: time="2026-04-13T20:19:18.402873590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:19:18.632505 containerd[1996]: time="2026-04-13T20:19:18.632393573Z" level=info msg="CreateContainer within sandbox 
\"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:19:18.675044 containerd[1996]: time="2026-04-13T20:19:18.674861615Z" level=info msg="CreateContainer within sandbox \"f119193808f264f197a0357326d07f35381f02fa14514442674b651d82d2bee9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"49c597089da2c854e8adea0154d2d11f2ed55d9822a1877572334d91ed3f2925\"" Apr 13 20:19:18.692802 containerd[1996]: time="2026-04-13T20:19:18.691869013Z" level=info msg="StartContainer for \"49c597089da2c854e8adea0154d2d11f2ed55d9822a1877572334d91ed3f2925\"" Apr 13 20:19:18.788627 containerd[1996]: time="2026-04-13T20:19:18.786937030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:19:18.788627 containerd[1996]: time="2026-04-13T20:19:18.786988148Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:18.796272 containerd[1996]: time="2026-04-13T20:19:18.795666420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 392.741695ms" Apr 13 20:19:18.796272 containerd[1996]: time="2026-04-13T20:19:18.795722996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:19:18.806315 containerd[1996]: time="2026-04-13T20:19:18.805492585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:19:18.840937 containerd[1996]: 
time="2026-04-13T20:19:18.840892895Z" level=info msg="CreateContainer within sandbox \"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:19:18.913756 containerd[1996]: time="2026-04-13T20:19:18.913630080Z" level=info msg="CreateContainer within sandbox \"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"81de80ae69741b37d0684aacce680429c5f803f54ae905523424fe6f5b0fa99a\"" Apr 13 20:19:18.916879 containerd[1996]: time="2026-04-13T20:19:18.916839790Z" level=info msg="StartContainer for \"81de80ae69741b37d0684aacce680429c5f803f54ae905523424fe6f5b0fa99a\"" Apr 13 20:19:19.064653 systemd[1]: Started cri-containerd-81de80ae69741b37d0684aacce680429c5f803f54ae905523424fe6f5b0fa99a.scope - libcontainer container 81de80ae69741b37d0684aacce680429c5f803f54ae905523424fe6f5b0fa99a. Apr 13 20:19:19.079342 systemd[1]: Started cri-containerd-49c597089da2c854e8adea0154d2d11f2ed55d9822a1877572334d91ed3f2925.scope - libcontainer container 49c597089da2c854e8adea0154d2d11f2ed55d9822a1877572334d91ed3f2925. Apr 13 20:19:19.263419 containerd[1996]: time="2026-04-13T20:19:19.263053807Z" level=info msg="StartContainer for \"49c597089da2c854e8adea0154d2d11f2ed55d9822a1877572334d91ed3f2925\" returns successfully" Apr 13 20:19:19.270266 containerd[1996]: time="2026-04-13T20:19:19.270196263Z" level=info msg="StartContainer for \"81de80ae69741b37d0684aacce680429c5f803f54ae905523424fe6f5b0fa99a\" returns successfully" Apr 13 20:19:19.548739 sshd[6288]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:19.554250 systemd[1]: sshd@8-172.31.17.102:22-50.85.169.122:49216.service: Deactivated successfully. Apr 13 20:19:19.559883 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:19:19.560987 systemd-logind[1966]: Session 9 logged out. Waiting for processes to exit. 
Apr 13 20:19:19.564623 systemd-logind[1966]: Removed session 9. Apr 13 20:19:20.003778 kubelet[3387]: I0413 20:19:19.994154 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-89cb875f9-5djhb" podStartSLOduration=46.938221859 podStartE2EDuration="55.963860307s" podCreationTimestamp="2026-04-13 20:18:24 +0000 UTC" firstStartedPulling="2026-04-13 20:19:04.152071515 +0000 UTC m=+58.721479213" lastFinishedPulling="2026-04-13 20:19:13.177709939 +0000 UTC m=+67.747117661" observedRunningTime="2026-04-13 20:19:14.542784142 +0000 UTC m=+69.112191863" watchObservedRunningTime="2026-04-13 20:19:19.963860307 +0000 UTC m=+74.533268028" Apr 13 20:19:20.005140 kubelet[3387]: I0413 20:19:20.004379 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-89cb875f9-6xnb6" podStartSLOduration=44.48708421 podStartE2EDuration="56.004353399s" podCreationTimestamp="2026-04-13 20:18:24 +0000 UTC" firstStartedPulling="2026-04-13 20:19:07.287742244 +0000 UTC m=+61.857149955" lastFinishedPulling="2026-04-13 20:19:18.805011443 +0000 UTC m=+73.374419144" observedRunningTime="2026-04-13 20:19:19.886830736 +0000 UTC m=+74.456238457" watchObservedRunningTime="2026-04-13 20:19:20.004353399 +0000 UTC m=+74.573761121" Apr 13 20:19:20.009320 kubelet[3387]: I0413 20:19:20.006907 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-557fbc7964-q7nls" podStartSLOduration=42.135487873 podStartE2EDuration="55.006889s" podCreationTimestamp="2026-04-13 20:18:25 +0000 UTC" firstStartedPulling="2026-04-13 20:19:05.531184704 +0000 UTC m=+60.100592420" lastFinishedPulling="2026-04-13 20:19:18.402585848 +0000 UTC m=+72.971993547" observedRunningTime="2026-04-13 20:19:20.003876245 +0000 UTC m=+74.573283987" watchObservedRunningTime="2026-04-13 20:19:20.006889 +0000 UTC m=+74.576296722" Apr 13 20:19:20.746784 kubelet[3387]: I0413 20:19:20.737259 
3387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:22.009116 containerd[1996]: time="2026-04-13T20:19:22.009058605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:22.011581 containerd[1996]: time="2026-04-13T20:19:22.010695781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:19:22.012638 containerd[1996]: time="2026-04-13T20:19:22.012568504Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:22.015502 containerd[1996]: time="2026-04-13T20:19:22.015418430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:22.016542 containerd[1996]: time="2026-04-13T20:19:22.016506462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.210964756s" Apr 13 20:19:22.016629 containerd[1996]: time="2026-04-13T20:19:22.016547492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:19:22.054070 containerd[1996]: time="2026-04-13T20:19:22.054022497Z" level=info msg="CreateContainer within sandbox \"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:19:22.096799 
containerd[1996]: time="2026-04-13T20:19:22.096749244Z" level=info msg="CreateContainer within sandbox \"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e563b926bc07a25eb9ac2dafca28b22e90714fa5b5092664ed7c340330a8a08f\"" Apr 13 20:19:22.098391 containerd[1996]: time="2026-04-13T20:19:22.097639820Z" level=info msg="StartContainer for \"e563b926bc07a25eb9ac2dafca28b22e90714fa5b5092664ed7c340330a8a08f\"" Apr 13 20:19:22.198641 systemd[1]: Started cri-containerd-e563b926bc07a25eb9ac2dafca28b22e90714fa5b5092664ed7c340330a8a08f.scope - libcontainer container e563b926bc07a25eb9ac2dafca28b22e90714fa5b5092664ed7c340330a8a08f. Apr 13 20:19:22.305313 containerd[1996]: time="2026-04-13T20:19:22.304688763Z" level=info msg="StartContainer for \"e563b926bc07a25eb9ac2dafca28b22e90714fa5b5092664ed7c340330a8a08f\" returns successfully" Apr 13 20:19:22.339733 containerd[1996]: time="2026-04-13T20:19:22.339688291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:19:24.749892 systemd[1]: Started sshd@9-172.31.17.102:22-50.85.169.122:57048.service - OpenSSH per-connection server daemon (50.85.169.122:57048). 
Apr 13 20:19:25.084251 containerd[1996]: time="2026-04-13T20:19:25.082329063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:25.085624 containerd[1996]: time="2026-04-13T20:19:25.085505833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:19:25.087728 containerd[1996]: time="2026-04-13T20:19:25.087578915Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:25.102700 containerd[1996]: time="2026-04-13T20:19:25.101792410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:25.102700 containerd[1996]: time="2026-04-13T20:19:25.102538544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.762797549s" Apr 13 20:19:25.102700 containerd[1996]: time="2026-04-13T20:19:25.102579426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:19:25.146265 containerd[1996]: time="2026-04-13T20:19:25.146164988Z" level=info msg="CreateContainer within sandbox \"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:19:25.223488 containerd[1996]: time="2026-04-13T20:19:25.223341674Z" level=info msg="CreateContainer within sandbox \"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ca978cdd64495e0f9920662a5a2d137b0ff6a31b595ea36363d5be1343611f5\"" Apr 13 20:19:25.223846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082811880.mount: Deactivated successfully. Apr 13 20:19:25.224797 containerd[1996]: time="2026-04-13T20:19:25.224492044Z" level=info msg="StartContainer for \"5ca978cdd64495e0f9920662a5a2d137b0ff6a31b595ea36363d5be1343611f5\"" Apr 13 20:19:25.367805 systemd[1]: Started cri-containerd-5ca978cdd64495e0f9920662a5a2d137b0ff6a31b595ea36363d5be1343611f5.scope - libcontainer container 5ca978cdd64495e0f9920662a5a2d137b0ff6a31b595ea36363d5be1343611f5. Apr 13 20:19:25.447184 containerd[1996]: time="2026-04-13T20:19:25.445977082Z" level=info msg="StartContainer for \"5ca978cdd64495e0f9920662a5a2d137b0ff6a31b595ea36363d5be1343611f5\" returns successfully" Apr 13 20:19:25.968829 sshd[6520]: Accepted publickey for core from 50.85.169.122 port 57048 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:25.977195 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:25.990907 systemd-logind[1966]: New session 10 of user core. Apr 13 20:19:25.998016 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 13 20:19:26.310572 kubelet[3387]: I0413 20:19:26.308964 3387 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:19:26.314531 kubelet[3387]: I0413 20:19:26.314481 3387 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:19:27.829589 sshd[6520]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:27.834075 systemd-logind[1966]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:19:27.834881 systemd[1]: sshd@9-172.31.17.102:22-50.85.169.122:57048.service: Deactivated successfully. Apr 13 20:19:27.838733 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:19:27.840505 systemd-logind[1966]: Removed session 10. Apr 13 20:19:28.021059 systemd[1]: Started sshd@10-172.31.17.102:22-50.85.169.122:57050.service - OpenSSH per-connection server daemon (50.85.169.122:57050). Apr 13 20:19:29.114368 sshd[6590]: Accepted publickey for core from 50.85.169.122 port 57050 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:29.117080 sshd[6590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:29.122714 systemd-logind[1966]: New session 11 of user core. Apr 13 20:19:29.127409 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:19:30.141926 sshd[6590]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:30.155563 systemd[1]: sshd@10-172.31.17.102:22-50.85.169.122:57050.service: Deactivated successfully. Apr 13 20:19:30.161738 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:19:30.170383 systemd-logind[1966]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:19:30.184168 systemd-logind[1966]: Removed session 11. 
Apr 13 20:19:30.315900 systemd[1]: Started sshd@11-172.31.17.102:22-50.85.169.122:38546.service - OpenSSH per-connection server daemon (50.85.169.122:38546). Apr 13 20:19:31.407848 sshd[6607]: Accepted publickey for core from 50.85.169.122 port 38546 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:31.411303 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:31.421924 systemd-logind[1966]: New session 12 of user core. Apr 13 20:19:31.427498 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:19:32.344432 sshd[6607]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:32.351796 systemd[1]: sshd@11-172.31.17.102:22-50.85.169.122:38546.service: Deactivated successfully. Apr 13 20:19:32.354867 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:19:32.356577 systemd-logind[1966]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:19:32.357883 systemd-logind[1966]: Removed session 12. Apr 13 20:19:37.505674 systemd[1]: Started sshd@12-172.31.17.102:22-50.85.169.122:38552.service - OpenSSH per-connection server daemon (50.85.169.122:38552). 
Apr 13 20:19:38.030645 kubelet[3387]: I0413 20:19:38.030593 3387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:38.251854 kubelet[3387]: I0413 20:19:38.242255 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cdjzz" podStartSLOduration=56.853534361 podStartE2EDuration="1m13.240131101s" podCreationTimestamp="2026-04-13 20:18:25 +0000 UTC" firstStartedPulling="2026-04-13 20:19:08.717421342 +0000 UTC m=+63.286829042" lastFinishedPulling="2026-04-13 20:19:25.104018072 +0000 UTC m=+79.673425782" observedRunningTime="2026-04-13 20:19:26.194989143 +0000 UTC m=+80.764396866" watchObservedRunningTime="2026-04-13 20:19:38.240131101 +0000 UTC m=+92.809538823" Apr 13 20:19:38.514267 sshd[6632]: Accepted publickey for core from 50.85.169.122 port 38552 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:38.528421 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:38.538813 systemd-logind[1966]: New session 13 of user core. Apr 13 20:19:38.543865 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:19:38.750276 kubelet[3387]: I0413 20:19:38.749798 3387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:39.547578 sshd[6632]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:39.556530 systemd[1]: sshd@12-172.31.17.102:22-50.85.169.122:38552.service: Deactivated successfully. Apr 13 20:19:39.557046 systemd-logind[1966]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:19:39.561443 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:19:39.565614 systemd-logind[1966]: Removed session 13. Apr 13 20:19:39.727634 systemd[1]: Started sshd@13-172.31.17.102:22-50.85.169.122:43960.service - OpenSSH per-connection server daemon (50.85.169.122:43960). 
Apr 13 20:19:40.818898 sshd[6650]: Accepted publickey for core from 50.85.169.122 port 43960 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:40.821023 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:40.826087 systemd-logind[1966]: New session 14 of user core. Apr 13 20:19:40.834454 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:19:42.808368 sshd[6650]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:42.818204 systemd[1]: sshd@13-172.31.17.102:22-50.85.169.122:43960.service: Deactivated successfully. Apr 13 20:19:42.820983 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:19:42.823530 systemd-logind[1966]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:19:42.825448 systemd-logind[1966]: Removed session 14. Apr 13 20:19:42.960636 systemd[1]: Started sshd@14-172.31.17.102:22-50.85.169.122:43968.service - OpenSSH per-connection server daemon (50.85.169.122:43968). Apr 13 20:19:43.972313 sshd[6683]: Accepted publickey for core from 50.85.169.122 port 43968 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:43.974870 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:43.981140 systemd-logind[1966]: New session 15 of user core. Apr 13 20:19:43.987489 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:19:45.624527 sshd[6683]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:45.630118 systemd[1]: sshd@14-172.31.17.102:22-50.85.169.122:43968.service: Deactivated successfully. Apr 13 20:19:45.633856 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:19:45.635969 systemd-logind[1966]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:19:45.640600 systemd-logind[1966]: Removed session 15. 
Apr 13 20:19:45.820918 systemd[1]: Started sshd@15-172.31.17.102:22-50.85.169.122:43984.service - OpenSSH per-connection server daemon (50.85.169.122:43984). Apr 13 20:19:46.921252 sshd[6709]: Accepted publickey for core from 50.85.169.122 port 43984 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:46.923240 sshd[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:46.927820 systemd-logind[1966]: New session 16 of user core. Apr 13 20:19:46.935457 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:19:48.674667 sshd[6709]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:48.680588 systemd[1]: sshd@15-172.31.17.102:22-50.85.169.122:43984.service: Deactivated successfully. Apr 13 20:19:48.685095 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:19:48.688012 systemd-logind[1966]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:19:48.689815 systemd-logind[1966]: Removed session 16. Apr 13 20:19:48.852773 systemd[1]: Started sshd@16-172.31.17.102:22-50.85.169.122:43994.service - OpenSSH per-connection server daemon (50.85.169.122:43994). Apr 13 20:19:49.910349 sshd[6722]: Accepted publickey for core from 50.85.169.122 port 43994 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:49.911057 sshd[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:49.916020 systemd-logind[1966]: New session 17 of user core. Apr 13 20:19:49.921759 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:19:50.743260 sshd[6722]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:50.749608 systemd[1]: sshd@16-172.31.17.102:22-50.85.169.122:43994.service: Deactivated successfully. Apr 13 20:19:50.751965 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:19:50.754152 systemd-logind[1966]: Session 17 logged out. Waiting for processes to exit. 
Apr 13 20:19:50.755966 systemd-logind[1966]: Removed session 17. Apr 13 20:19:55.907537 systemd[1]: Started sshd@17-172.31.17.102:22-50.85.169.122:43472.service - OpenSSH per-connection server daemon (50.85.169.122:43472). Apr 13 20:19:56.948085 sshd[6803]: Accepted publickey for core from 50.85.169.122 port 43472 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:19:56.951780 sshd[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:56.957291 systemd-logind[1966]: New session 18 of user core. Apr 13 20:19:56.965569 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:19:57.935642 sshd[6803]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:57.939347 systemd[1]: sshd@17-172.31.17.102:22-50.85.169.122:43472.service: Deactivated successfully. Apr 13 20:19:57.942249 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:19:57.945129 systemd-logind[1966]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:19:57.946653 systemd-logind[1966]: Removed session 18. Apr 13 20:20:03.130569 systemd[1]: Started sshd@18-172.31.17.102:22-50.85.169.122:43426.service - OpenSSH per-connection server daemon (50.85.169.122:43426). Apr 13 20:20:04.229465 sshd[6817]: Accepted publickey for core from 50.85.169.122 port 43426 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:04.232771 sshd[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:04.242664 systemd-logind[1966]: New session 19 of user core. Apr 13 20:20:04.252448 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 20:20:05.514325 sshd[6817]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:05.518983 systemd-logind[1966]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:20:05.521533 systemd[1]: sshd@18-172.31.17.102:22-50.85.169.122:43426.service: Deactivated successfully. 
Apr 13 20:20:05.524691 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:20:05.529792 systemd-logind[1966]: Removed session 19. Apr 13 20:20:09.761522 containerd[1996]: time="2026-04-13T20:20:09.735421384Z" level=info msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\"" Apr 13 20:20:10.696997 systemd[1]: Started sshd@19-172.31.17.102:22-50.85.169.122:49638.service - OpenSSH per-connection server daemon (50.85.169.122:49638). Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.411 [WARNING][6840] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5c81d3c-90a0-440b-96eb-db49837fb4b5", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94", Pod:"csi-node-driver-cdjzz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39b5bd93dac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.416 [INFO][6840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.417 [INFO][6840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" iface="eth0" netns="" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.417 [INFO][6840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.417 [INFO][6840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.876 [INFO][6847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.882 [INFO][6847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.883 [INFO][6847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.903 [WARNING][6847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.903 [INFO][6847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.905 [INFO][6847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:10.910420 containerd[1996]: 2026-04-13 20:20:10.907 [INFO][6840] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:10.916075 containerd[1996]: time="2026-04-13T20:20:10.916016391Z" level=info msg="TearDown network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" successfully" Apr 13 20:20:10.916075 containerd[1996]: time="2026-04-13T20:20:10.916074743Z" level=info msg="StopPodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" returns successfully" Apr 13 20:20:10.934331 containerd[1996]: time="2026-04-13T20:20:10.934284386Z" level=info msg="RemovePodSandbox for \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\"" Apr 13 20:20:10.937532 containerd[1996]: time="2026-04-13T20:20:10.937484701Z" level=info msg="Forcibly stopping sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\"" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:10.998 [WARNING][6864] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5c81d3c-90a0-440b-96eb-db49837fb4b5", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"57f21c5a8952ab02705c514a3acb160a8c64e9eb537ddb39cbf5a151937b7e94", Pod:"csi-node-driver-cdjzz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39b5bd93dac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:10.998 [INFO][6864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:10.998 [INFO][6864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" iface="eth0" netns="" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:10.998 [INFO][6864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:10.998 [INFO][6864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.027 [INFO][6871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.027 [INFO][6871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.028 [INFO][6871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.037 [WARNING][6871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.038 [INFO][6871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" HandleID="k8s-pod-network.5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Workload="ip--172--31--17--102-k8s-csi--node--driver--cdjzz-eth0" Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.039 [INFO][6871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:11.046812 containerd[1996]: 2026-04-13 20:20:11.043 [INFO][6864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327" Apr 13 20:20:11.047953 containerd[1996]: time="2026-04-13T20:20:11.046806036Z" level=info msg="TearDown network for sandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" successfully" Apr 13 20:20:11.152237 containerd[1996]: time="2026-04-13T20:20:11.152000623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:11.152237 containerd[1996]: time="2026-04-13T20:20:11.152122868Z" level=info msg="RemovePodSandbox \"5d4e1d19899f966c387d2f2269772789a909536a84dd0de9cf02c01e5a22c327\" returns successfully" Apr 13 20:20:11.188243 containerd[1996]: time="2026-04-13T20:20:11.186445863Z" level=info msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\"" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.236 [WARNING][6886] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b00fbc7c-661e-42f4-86eb-d3bcca719bc6", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008", Pod:"coredns-66bc5c9577-9jf8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba3db61cbf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.237 [INFO][6886] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.237 [INFO][6886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" iface="eth0" netns="" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.237 [INFO][6886] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.237 [INFO][6886] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.266 [INFO][6896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.266 [INFO][6896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.266 [INFO][6896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.273 [WARNING][6896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.274 [INFO][6896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.277 [INFO][6896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:11.281952 containerd[1996]: 2026-04-13 20:20:11.279 [INFO][6886] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.285767 containerd[1996]: time="2026-04-13T20:20:11.281988755Z" level=info msg="TearDown network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" successfully" Apr 13 20:20:11.285767 containerd[1996]: time="2026-04-13T20:20:11.282019287Z" level=info msg="StopPodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" returns successfully" Apr 13 20:20:11.285767 containerd[1996]: time="2026-04-13T20:20:11.282598516Z" level=info msg="RemovePodSandbox for \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\"" Apr 13 20:20:11.285767 containerd[1996]: time="2026-04-13T20:20:11.282630578Z" level=info msg="Forcibly stopping sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\"" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.337 [WARNING][6910] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b00fbc7c-661e-42f4-86eb-d3bcca719bc6", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"826664d4c3d5903ba709dfaebe79ae698430fab4020ee451651dc0d3574dc008", Pod:"coredns-66bc5c9577-9jf8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba3db61cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.337 [INFO][6910] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.337 [INFO][6910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" iface="eth0" netns="" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.337 [INFO][6910] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.337 [INFO][6910] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.374 [INFO][6918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.374 [INFO][6918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.374 [INFO][6918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.381 [WARNING][6918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.381 [INFO][6918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" HandleID="k8s-pod-network.95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--9jf8f-eth0" Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.383 [INFO][6918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:11.404241 containerd[1996]: 2026-04-13 20:20:11.386 [INFO][6910] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a" Apr 13 20:20:11.404241 containerd[1996]: time="2026-04-13T20:20:11.401776570Z" level=info msg="TearDown network for sandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" successfully" Apr 13 20:20:11.410791 containerd[1996]: time="2026-04-13T20:20:11.408711948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:11.410791 containerd[1996]: time="2026-04-13T20:20:11.408877457Z" level=info msg="RemovePodSandbox \"95faba9f5b8aaf3a3c5238f7125e7731bf2451e573ecec33ca55fc0173a2f25a\" returns successfully" Apr 13 20:20:11.410791 containerd[1996]: time="2026-04-13T20:20:11.409833235Z" level=info msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\"" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.576 [WARNING][6933] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3e807f6a-3dfa-4c8a-9873-de1da77007ef", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2", Pod:"coredns-66bc5c9577-ktw4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacef0f69325", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.576 [INFO][6933] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.576 [INFO][6933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" iface="eth0" netns="" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.576 [INFO][6933] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.576 [INFO][6933] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.611 [INFO][6940] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.611 [INFO][6940] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.611 [INFO][6940] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.618 [WARNING][6940] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.618 [INFO][6940] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.620 [INFO][6940] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:11.625127 containerd[1996]: 2026-04-13 20:20:11.622 [INFO][6933] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.627956 containerd[1996]: time="2026-04-13T20:20:11.625178751Z" level=info msg="TearDown network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" successfully" Apr 13 20:20:11.627956 containerd[1996]: time="2026-04-13T20:20:11.625413791Z" level=info msg="StopPodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" returns successfully" Apr 13 20:20:11.627956 containerd[1996]: time="2026-04-13T20:20:11.627078140Z" level=info msg="RemovePodSandbox for \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\"" Apr 13 20:20:11.627956 containerd[1996]: time="2026-04-13T20:20:11.627146514Z" level=info msg="Forcibly stopping sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\"" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.677 [WARNING][6954] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3e807f6a-3dfa-4c8a-9873-de1da77007ef", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3fa7ceebd31f6e72f1b3ad1fff683f82e4c1d9fdb7aad2c876421c38648f75a2", Pod:"coredns-66bc5c9577-ktw4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacef0f69325", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.678 [INFO][6954] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.678 [INFO][6954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" iface="eth0" netns="" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.678 [INFO][6954] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.678 [INFO][6954] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.720 [INFO][6961] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.722 [INFO][6961] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.722 [INFO][6961] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.732 [WARNING][6961] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.732 [INFO][6961] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" HandleID="k8s-pod-network.5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Workload="ip--172--31--17--102-k8s-coredns--66bc5c9577--ktw4w-eth0" Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.745 [INFO][6961] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:11.755574 containerd[1996]: 2026-04-13 20:20:11.751 [INFO][6954] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae" Apr 13 20:20:11.756368 containerd[1996]: time="2026-04-13T20:20:11.755639322Z" level=info msg="TearDown network for sandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" successfully" Apr 13 20:20:11.761884 containerd[1996]: time="2026-04-13T20:20:11.761831370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:11.762114 containerd[1996]: time="2026-04-13T20:20:11.761949084Z" level=info msg="RemovePodSandbox \"5844ca6e5a326b4b1fbebfd022e08c03443fa9fe9642b58dc85a6926f96620ae\" returns successfully" Apr 13 20:20:11.762582 containerd[1996]: time="2026-04-13T20:20:11.762553409Z" level=info msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\"" Apr 13 20:20:11.822182 sshd[6852]: Accepted publickey for core from 50.85.169.122 port 49638 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:11.828576 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:11.835864 systemd-logind[1966]: New session 20 of user core. Apr 13 20:20:11.843681 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.826 [WARNING][6975] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"693a1ffc-7980-4490-bc2b-aca384d54013", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f", Pod:"calico-apiserver-89cb875f9-6xnb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56254d15e8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.826 [INFO][6975] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.826 [INFO][6975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" iface="eth0" netns=""
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.826 [INFO][6975] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.826 [INFO][6975] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.871 [INFO][6983] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.871 [INFO][6983] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.871 [INFO][6983] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.878 [WARNING][6983] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.878 [INFO][6983] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.880 [INFO][6983] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:20:11.884794 containerd[1996]: 2026-04-13 20:20:11.882 [INFO][6975] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.884794 containerd[1996]: time="2026-04-13T20:20:11.884660536Z" level=info msg="TearDown network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" successfully"
Apr 13 20:20:11.884794 containerd[1996]: time="2026-04-13T20:20:11.884682396Z" level=info msg="StopPodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" returns successfully"
Apr 13 20:20:11.887942 containerd[1996]: time="2026-04-13T20:20:11.885427714Z" level=info msg="RemovePodSandbox for \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\""
Apr 13 20:20:11.887942 containerd[1996]: time="2026-04-13T20:20:11.885459142Z" level=info msg="Forcibly stopping sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\""
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.927 [WARNING][6999] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0", GenerateName:"calico-apiserver-89cb875f9-", Namespace:"calico-system", SelfLink:"", UID:"693a1ffc-7980-4490-bc2b-aca384d54013", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"89cb875f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-102", ContainerID:"3f473f046ccc297a5990babe5697893ad4b01b389207cb5108781689f0b8cb0f", Pod:"calico-apiserver-89cb875f9-6xnb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56254d15e8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.927 [INFO][6999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.927 [INFO][6999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name,
ignoring. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" iface="eth0" netns=""
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.928 [INFO][6999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.928 [INFO][6999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.961 [INFO][7006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.961 [INFO][7006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.962 [INFO][7006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.968 [WARNING][7006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.969 [INFO][7006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" HandleID="k8s-pod-network.cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c" Workload="ip--172--31--17--102-k8s-calico--apiserver--89cb875f9--6xnb6-eth0"
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.972 [INFO][7006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:20:11.978410 containerd[1996]: 2026-04-13 20:20:11.975 [INFO][6999] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c"
Apr 13 20:20:11.979854 containerd[1996]: time="2026-04-13T20:20:11.979203147Z" level=info msg="TearDown network for sandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" successfully"
Apr 13 20:20:11.984324 containerd[1996]: time="2026-04-13T20:20:11.984121421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:20:11.984324 containerd[1996]: time="2026-04-13T20:20:11.984203845Z" level=info msg="RemovePodSandbox \"cc0876b6b40f7aa9591c474c2027d4ac1a95697b44a67f851b6c13447fe3b02c\" returns successfully"
Apr 13 20:20:14.102741 sshd[6852]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:14.109448 systemd[1]: sshd@19-172.31.17.102:22-50.85.169.122:49638.service: Deactivated successfully.
Apr 13 20:20:14.111951 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:20:14.113667 systemd-logind[1966]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:20:14.114997 systemd-logind[1966]: Removed session 20.
Apr 13 20:20:18.140638 systemd[1]: run-containerd-runc-k8s.io-da942cca1a5c3a898caf8deae44c66a2efcd8d575ee8e75a201d837395486f1c-runc.HderFP.mount: Deactivated successfully.
Apr 13 20:20:28.348079 kubelet[3387]: E0413 20:20:28.347998 3387 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": context deadline exceeded"
Apr 13 20:20:28.707019 systemd[1]: cri-containerd-71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299.scope: Deactivated successfully.
Apr 13 20:20:28.707713 systemd[1]: cri-containerd-71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299.scope: Consumed 3.965s CPU time, 16.7M memory peak, 0B memory swap peak.
Apr 13 20:20:28.950228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299-rootfs.mount: Deactivated successfully.
Apr 13 20:20:29.054824 containerd[1996]: time="2026-04-13T20:20:28.976114116Z" level=info msg="shim disconnected" id=71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299 namespace=k8s.io
Apr 13 20:20:29.054824 containerd[1996]: time="2026-04-13T20:20:29.054820679Z" level=warning msg="cleaning up after shim disconnected" id=71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299 namespace=k8s.io
Apr 13 20:20:29.055937 containerd[1996]: time="2026-04-13T20:20:29.054851576Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:29.418222 systemd[1]: cri-containerd-6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04.scope: Deactivated successfully.
Apr 13 20:20:29.418550 systemd[1]: cri-containerd-6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04.scope: Consumed 10.602s CPU time.
Apr 13 20:20:29.447326 containerd[1996]: time="2026-04-13T20:20:29.447175380Z" level=info msg="shim disconnected" id=6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04 namespace=k8s.io
Apr 13 20:20:29.447326 containerd[1996]: time="2026-04-13T20:20:29.447320260Z" level=warning msg="cleaning up after shim disconnected" id=6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04 namespace=k8s.io
Apr 13 20:20:29.451534 containerd[1996]: time="2026-04-13T20:20:29.447335985Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:29.451358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04-rootfs.mount: Deactivated successfully.
Apr 13 20:20:29.473763 containerd[1996]: time="2026-04-13T20:20:29.473708161Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:20:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:20:30.060598 kubelet[3387]: I0413 20:20:30.060547 3387 scope.go:117] "RemoveContainer" containerID="6d22a468a328d0b9487851e099cec54f0a5721c9de828fe5b1653863aa6b9e04"
Apr 13 20:20:30.069865 kubelet[3387]: I0413 20:20:30.069817 3387 scope.go:117] "RemoveContainer" containerID="71c7f6abc923d7e0f47eb845b0cb1290a4cc6dd666e908bb1d404b12463ea299"
Apr 13 20:20:30.180576 containerd[1996]: time="2026-04-13T20:20:30.180505709Z" level=info msg="CreateContainer within sandbox \"3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 20:20:30.181319 containerd[1996]: time="2026-04-13T20:20:30.180505818Z" level=info msg="CreateContainer within sandbox
\"51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 20:20:30.323562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount413415100.mount: Deactivated successfully.
Apr 13 20:20:30.331100 containerd[1996]: time="2026-04-13T20:20:30.331047078Z" level=info msg="CreateContainer within sandbox \"51d6c93b85284d159b02991c6e39bf43d90b91f85159594611696ec6ad693b52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"299d1bb49b84cd1cb2df3cad65950b3feb79ba407901a0ae95cf4a29d2f06207\""
Apr 13 20:20:30.332055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549153935.mount: Deactivated successfully.
Apr 13 20:20:30.332630 containerd[1996]: time="2026-04-13T20:20:30.332475581Z" level=info msg="CreateContainer within sandbox \"3d26f8306aa73610f54640568f90a0da3a3d3e8006693469b897481b8fea727a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4cb91588873c56a247690f6c578a62e95f4bd20fe9cda48c513e5c55cde1146e\""
Apr 13 20:20:30.335358 containerd[1996]: time="2026-04-13T20:20:30.334705493Z" level=info msg="StartContainer for \"4cb91588873c56a247690f6c578a62e95f4bd20fe9cda48c513e5c55cde1146e\""
Apr 13 20:20:30.335815 containerd[1996]: time="2026-04-13T20:20:30.335785944Z" level=info msg="StartContainer for \"299d1bb49b84cd1cb2df3cad65950b3feb79ba407901a0ae95cf4a29d2f06207\""
Apr 13 20:20:30.421649 systemd[1]: Started cri-containerd-299d1bb49b84cd1cb2df3cad65950b3feb79ba407901a0ae95cf4a29d2f06207.scope - libcontainer container 299d1bb49b84cd1cb2df3cad65950b3feb79ba407901a0ae95cf4a29d2f06207.
Apr 13 20:20:30.424916 systemd[1]: Started cri-containerd-4cb91588873c56a247690f6c578a62e95f4bd20fe9cda48c513e5c55cde1146e.scope - libcontainer container 4cb91588873c56a247690f6c578a62e95f4bd20fe9cda48c513e5c55cde1146e.
Apr 13 20:20:30.516182 containerd[1996]: time="2026-04-13T20:20:30.515787488Z" level=info msg="StartContainer for \"299d1bb49b84cd1cb2df3cad65950b3feb79ba407901a0ae95cf4a29d2f06207\" returns successfully"
Apr 13 20:20:30.522915 containerd[1996]: time="2026-04-13T20:20:30.521979061Z" level=info msg="StartContainer for \"4cb91588873c56a247690f6c578a62e95f4bd20fe9cda48c513e5c55cde1146e\" returns successfully"
Apr 13 20:20:34.401877 systemd[1]: cri-containerd-d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6.scope: Deactivated successfully.
Apr 13 20:20:34.403859 systemd[1]: cri-containerd-d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6.scope: Consumed 2.786s CPU time, 13.8M memory peak, 0B memory swap peak.
Apr 13 20:20:34.429819 containerd[1996]: time="2026-04-13T20:20:34.429751635Z" level=info msg="shim disconnected" id=d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6 namespace=k8s.io
Apr 13 20:20:34.430459 containerd[1996]: time="2026-04-13T20:20:34.430267037Z" level=warning msg="cleaning up after shim disconnected" id=d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6 namespace=k8s.io
Apr 13 20:20:34.430459 containerd[1996]: time="2026-04-13T20:20:34.430294641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:34.433975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6-rootfs.mount: Deactivated successfully.
Apr 13 20:20:35.078376 kubelet[3387]: I0413 20:20:35.078344 3387 scope.go:117] "RemoveContainer" containerID="d304a6bc9b5b6ec7d6f3a5f401fd93360ffb7816cfdb1d3aab634714d2a5f0a6"
Apr 13 20:20:35.081370 containerd[1996]: time="2026-04-13T20:20:35.081330570Z" level=info msg="CreateContainer within sandbox \"efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 20:20:35.107577 containerd[1996]: time="2026-04-13T20:20:35.107529365Z" level=info msg="CreateContainer within sandbox \"efd0f67b05d6aa36d5cde66bc62cf573f1ef9f4c49442a8fe47bb7458720da18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cde40aaced1e37f94f66512b2bd9a1149ea5a4de32248f7521cfb7514769fb9d\""
Apr 13 20:20:35.108384 containerd[1996]: time="2026-04-13T20:20:35.108237261Z" level=info msg="StartContainer for \"cde40aaced1e37f94f66512b2bd9a1149ea5a4de32248f7521cfb7514769fb9d\""
Apr 13 20:20:35.165475 systemd[1]: Started cri-containerd-cde40aaced1e37f94f66512b2bd9a1149ea5a4de32248f7521cfb7514769fb9d.scope - libcontainer container cde40aaced1e37f94f66512b2bd9a1149ea5a4de32248f7521cfb7514769fb9d.
Apr 13 20:20:35.216275 containerd[1996]: time="2026-04-13T20:20:35.216201841Z" level=info msg="StartContainer for \"cde40aaced1e37f94f66512b2bd9a1149ea5a4de32248f7521cfb7514769fb9d\" returns successfully"
Apr 13 20:20:38.349283 kubelet[3387]: E0413 20:20:38.348839 3387 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-102?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"