Apr 17 23:35:21.986163 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:35:21.986216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:35:21.986238 kernel: BIOS-provided physical RAM map:
Apr 17 23:35:21.986249 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:35:21.986259 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:35:21.986269 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:35:21.986282 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:35:21.986293 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:35:21.986303 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:35:21.986318 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:35:21.986329 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:35:21.986339 kernel: NX (Execute Disable) protection: active
Apr 17 23:35:21.986353 kernel: APIC: Static calls initialized
Apr 17 23:35:21.986369 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:35:21.986384 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:35:21.986399 kernel: SMBIOS 2.7 present.
Apr 17 23:35:21.986411 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:35:21.986423 kernel: Hypervisor detected: KVM
Apr 17 23:35:21.986436 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:35:21.986466 kernel: kvm-clock: using sched offset of 4240449473 cycles
Apr 17 23:35:21.986480 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:35:21.986494 kernel: tsc: Detected 2499.998 MHz processor
Apr 17 23:35:21.986508 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:35:21.986522 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:35:21.986536 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:35:21.986554 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:35:21.986568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:35:21.986583 kernel: Using GB pages for direct mapping
Apr 17 23:35:21.986596 kernel: Secure boot disabled
Apr 17 23:35:21.986611 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:35:21.986626 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:35:21.986640 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:35:21.986655 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:35:21.986668 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:35:21.986683 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:35:21.986694 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:35:21.986706 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:35:21.986719 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:35:21.986733 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:35:21.986747 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:35:21.986767 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:35:21.986785 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:35:21.986800 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:35:21.986816 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:35:21.986828 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:35:21.986841 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:35:21.986855 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:35:21.986869 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:35:21.986886 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:35:21.986898 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:35:21.986912 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:35:21.986941 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:35:21.987010 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:35:21.987110 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:35:21.987134 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:35:21.987147 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:35:21.987161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:35:21.987179 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:35:21.987194 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:35:21.987209 kernel: Zone ranges:
Apr 17 23:35:21.987225 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:35:21.987239 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:35:21.987255 kernel: Normal empty
Apr 17 23:35:21.987270 kernel: Movable zone start for each node
Apr 17 23:35:21.987285 kernel: Early memory node ranges
Apr 17 23:35:21.987300 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:35:21.987316 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:35:21.987335 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:35:21.987351 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:35:21.987367 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:35:21.987382 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:35:21.987399 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:35:21.987415 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:35:21.987431 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:35:21.987446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:35:21.989520 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:35:21.989544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:35:21.989559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:35:21.989573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:35:21.989587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:35:21.989601 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:35:21.989616 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:35:21.989633 kernel: TSC deadline timer available
Apr 17 23:35:21.989648 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:35:21.989664 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:35:21.989686 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:35:21.989703 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:35:21.989720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:35:21.989736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:35:21.989752 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:35:21.989768 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:35:21.989784 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:35:21.989799 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:35:21.989815 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:35:21.989839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:35:21.989856 kernel: random: crng init done
Apr 17 23:35:21.989872 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:35:21.989888 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:35:21.989904 kernel: Fallback order for Node 0: 0
Apr 17 23:35:21.989920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:35:21.989936 kernel: Policy zone: DMA32
Apr 17 23:35:21.989953 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:35:21.989973 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:35:21.989989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:35:21.990004 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:35:21.990018 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:35:21.990033 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:35:21.990047 kernel: Dynamic Preempt: voluntary
Apr 17 23:35:21.990062 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:35:21.990077 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:35:21.990091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:35:21.990109 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:35:21.990122 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:35:21.990137 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:35:21.990151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:35:21.990165 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:35:21.990179 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:35:21.990195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:35:21.990240 kernel: Console: colour dummy device 80x25
Apr 17 23:35:21.990255 kernel: printk: console [tty0] enabled
Apr 17 23:35:21.990271 kernel: printk: console [ttyS0] enabled
Apr 17 23:35:21.990288 kernel: ACPI: Core revision 20230628
Apr 17 23:35:21.990306 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:35:21.990326 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:35:21.990343 kernel: x2apic enabled
Apr 17 23:35:21.990361 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:35:21.990377 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 17 23:35:21.990392 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 17 23:35:21.990409 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:35:21.990424 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:35:21.990438 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:35:21.990466 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:35:21.990480 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:35:21.990493 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:35:21.990507 kernel: RETBleed: Vulnerable
Apr 17 23:35:21.990520 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:35:21.990532 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:35:21.990544 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:35:21.990559 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:35:21.990572 kernel: active return thunk: its_return_thunk
Apr 17 23:35:21.990585 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:35:21.990598 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:35:21.990611 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:35:21.990624 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:35:21.990637 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:35:21.990650 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:35:21.990663 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:35:21.990675 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:35:21.990689 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:35:21.990707 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:35:21.990722 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:35:21.990737 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:35:21.990750 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:35:21.990764 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:35:21.990777 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:35:21.990792 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:35:21.990806 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:35:21.990822 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:35:21.990836 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:35:21.990850 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:35:21.990863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:35:21.990880 kernel: landlock: Up and running.
Apr 17 23:35:21.990894 kernel: SELinux: Initializing.
Apr 17 23:35:21.990910 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:35:21.990924 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:35:21.990940 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Apr 17 23:35:21.990956 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:21.990972 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:21.990988 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:35:21.991004 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:35:21.991018 kernel: signal: max sigframe size: 3632
Apr 17 23:35:21.991038 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:35:21.991054 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:35:21.991071 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:35:21.991086 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:35:21.991102 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:35:21.991117 kernel: .... node #0, CPUs: #1
Apr 17 23:35:21.991135 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:35:21.991152 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:35:21.991171 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:35:21.991187 kernel: smpboot: Max logical packages: 1
Apr 17 23:35:21.991202 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 17 23:35:21.991218 kernel: devtmpfs: initialized
Apr 17 23:35:21.991232 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:35:21.991247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:35:21.991263 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:35:21.991279 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:35:21.991294 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:35:21.991312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:35:21.991328 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:35:21.991343 kernel: audit: type=2000 audit(1776468921.853:1): state=initialized audit_enabled=0 res=1
Apr 17 23:35:21.991358 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:35:21.991374 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:35:21.991389 kernel: cpuidle: using governor menu
Apr 17 23:35:21.991405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:35:21.991420 kernel: dca service started, version 1.12.1
Apr 17 23:35:21.991435 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:35:21.993836 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:35:21.993854 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:35:21.993868 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:35:21.993882 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:35:21.993896 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:35:21.993910 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:35:21.993924 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:35:21.993939 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:35:21.993953 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:35:21.993973 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:35:21.993987 kernel: ACPI: Interpreter enabled
Apr 17 23:35:21.994001 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:35:21.994016 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:35:21.994030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:35:21.994044 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:35:21.994059 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:35:21.994073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:35:21.994334 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:35:21.994503 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:35:21.994639 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:35:21.994658 kernel: acpiphp: Slot [3] registered
Apr 17 23:35:21.994674 kernel: acpiphp: Slot [4] registered
Apr 17 23:35:21.994691 kernel: acpiphp: Slot [5] registered
Apr 17 23:35:21.994707 kernel: acpiphp: Slot [6] registered
Apr 17 23:35:21.994722 kernel: acpiphp: Slot [7] registered
Apr 17 23:35:21.994737 kernel: acpiphp: Slot [8] registered
Apr 17 23:35:21.994757 kernel: acpiphp: Slot [9] registered
Apr 17 23:35:21.994772 kernel: acpiphp: Slot [10] registered
Apr 17 23:35:21.994788 kernel: acpiphp: Slot [11] registered
Apr 17 23:35:21.994803 kernel: acpiphp: Slot [12] registered
Apr 17 23:35:21.994819 kernel: acpiphp: Slot [13] registered
Apr 17 23:35:21.994834 kernel: acpiphp: Slot [14] registered
Apr 17 23:35:21.994850 kernel: acpiphp: Slot [15] registered
Apr 17 23:35:21.994865 kernel: acpiphp: Slot [16] registered
Apr 17 23:35:21.994880 kernel: acpiphp: Slot [17] registered
Apr 17 23:35:21.994899 kernel: acpiphp: Slot [18] registered
Apr 17 23:35:21.994914 kernel: acpiphp: Slot [19] registered
Apr 17 23:35:21.994929 kernel: acpiphp: Slot [20] registered
Apr 17 23:35:21.994945 kernel: acpiphp: Slot [21] registered
Apr 17 23:35:21.994959 kernel: acpiphp: Slot [22] registered
Apr 17 23:35:21.994971 kernel: acpiphp: Slot [23] registered
Apr 17 23:35:21.994986 kernel: acpiphp: Slot [24] registered
Apr 17 23:35:21.995000 kernel: acpiphp: Slot [25] registered
Apr 17 23:35:21.995014 kernel: acpiphp: Slot [26] registered
Apr 17 23:35:21.995028 kernel: acpiphp: Slot [27] registered
Apr 17 23:35:21.995047 kernel: acpiphp: Slot [28] registered
Apr 17 23:35:21.995061 kernel: acpiphp: Slot [29] registered
Apr 17 23:35:21.995076 kernel: acpiphp: Slot [30] registered
Apr 17 23:35:21.995091 kernel: acpiphp: Slot [31] registered
Apr 17 23:35:21.995105 kernel: PCI host bridge to bus 0000:00
Apr 17 23:35:21.995290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:35:21.995419 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:35:21.997633 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:35:21.997779 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:35:21.998026 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:35:21.998216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:35:21.998386 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:35:21.998592 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:35:21.998753 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:35:21.998905 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:35:21.999050 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:35:21.999198 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:35:21.999336 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:35:22.001666 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:35:22.001848 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:35:22.001988 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:35:22.002143 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:35:22.002294 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:35:22.002433 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:35:22.002595 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:35:22.002731 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:35:22.002882 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:35:22.003017 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:35:22.003165 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:35:22.003300 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:35:22.003321 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:35:22.003338 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:35:22.003355 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:35:22.003372 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:35:22.003388 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:35:22.003405 kernel: iommu: Default domain type: Translated
Apr 17 23:35:22.003426 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:35:22.003443 kernel: efivars: Registered efivars operations
Apr 17 23:35:22.005506 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:35:22.005526 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:35:22.005543 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:35:22.005559 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:35:22.005746 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:35:22.005894 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:35:22.009600 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:35:22.009637 kernel: vgaarb: loaded
Apr 17 23:35:22.009656 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:35:22.009673 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:35:22.009690 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:35:22.009707 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:35:22.009724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:35:22.009740 kernel: pnp: PnP ACPI init
Apr 17 23:35:22.009757 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:35:22.009780 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:35:22.009797 kernel: NET: Registered PF_INET protocol family
Apr 17 23:35:22.009814 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:35:22.009832 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:35:22.009849 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:35:22.009866 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:35:22.009884 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:35:22.009901 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:35:22.009917 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:35:22.009938 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:35:22.009955 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:35:22.009971 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:35:22.010134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:35:22.010399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:35:22.010559 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:35:22.010700 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:35:22.010837 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:35:22.011015 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:35:22.011041 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:35:22.011060 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:35:22.011080 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 17 23:35:22.011100 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:35:22.011119 kernel: Initialise system trusted keyrings
Apr 17 23:35:22.011138 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:35:22.011158 kernel: Key type asymmetric registered
Apr 17 23:35:22.011173 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:35:22.011195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:35:22.011211 kernel: io scheduler mq-deadline registered
Apr 17 23:35:22.011226 kernel: io scheduler kyber registered
Apr 17 23:35:22.011239 kernel: io scheduler bfq registered
Apr 17 23:35:22.011254 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:35:22.011271 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:35:22.011287 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:35:22.011302 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:35:22.011318 kernel: i8042: Warning: Keylock active
Apr 17 23:35:22.011335 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:35:22.011349 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:35:22.013749 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:35:22.014036 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:35:22.014312 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:35:21 UTC (1776468921)
Apr 17 23:35:22.014582 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:35:22.014606 kernel: intel_pstate: CPU model not supported
Apr 17 23:35:22.014629 kernel: efifb: probing for efifb
Apr 17 23:35:22.014646 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:35:22.014664 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:35:22.014680 kernel: efifb: scrolling: redraw
Apr 17 23:35:22.014696 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:35:22.014714 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:35:22.014732 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:35:22.014748 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:35:22.014765 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:35:22.014782 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:35:22.014803 kernel: Segment Routing with IPv6
Apr 17 23:35:22.014820 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:35:22.014837 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:35:22.014855 kernel: Key type dns_resolver registered
Apr 17 23:35:22.014872 kernel: IPI shorthand broadcast: enabled
Apr 17 23:35:22.014918 kernel: sched_clock: Marking stable (548001916, 181044796)->(839655941, -110609229)
Apr 17 23:35:22.014940 kernel: registered taskstats version 1
Apr 17 23:35:22.014958 kernel: Loading compiled-in X.509 certificates
Apr 17 23:35:22.014976 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:35:22.014997 kernel: Key type .fscrypt registered
Apr 17 23:35:22.015014 kernel: Key type fscrypt-provisioning registered
Apr 17 23:35:22.015031 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:35:22.015049 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:35:22.015066 kernel: ima: No architecture policies found
Apr 17 23:35:22.015083 kernel: clk: Disabling unused clocks
Apr 17 23:35:22.015102 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:35:22.015119 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:35:22.015137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:35:22.015158 kernel: Run /init as init process
Apr 17 23:35:22.015176 kernel: with arguments:
Apr 17 23:35:22.015194 kernel: /init
Apr 17 23:35:22.015212 kernel: with environment:
Apr 17 23:35:22.015229 kernel: HOME=/
Apr 17 23:35:22.015246 kernel: TERM=linux
Apr 17 23:35:22.015268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:35:22.015289 systemd[1]: Detected virtualization amazon.
Apr 17 23:35:22.015312 systemd[1]: Detected architecture x86-64.
Apr 17 23:35:22.015329 systemd[1]: Running in initrd.
Apr 17 23:35:22.015346 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:35:22.015363 systemd[1]: Hostname set to .
Apr 17 23:35:22.015382 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:35:22.015401 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:35:22.015419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:35:22.015437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:35:22.019182 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:35:22.019209 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:35:22.019241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:35:22.019266 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:35:22.019288 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:35:22.019306 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:35:22.019333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:35:22.019350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:35:22.019366 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:35:22.019383 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:35:22.019400 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:35:22.019416 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:35:22.019435 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:35:22.019464 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:35:22.019481 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:35:22.019498 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:35:22.019514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:35:22.019531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:35:22.019549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:35:22.019566 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:35:22.019586 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:35:22.019601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:35:22.019616 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:35:22.019632 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:35:22.019648 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:35:22.019669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:35:22.019720 systemd-journald[179]: Collecting audit messages is disabled.
Apr 17 23:35:22.019764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:22.019783 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:35:22.019802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:35:22.019821 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:35:22.019844 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:35:22.019866 systemd-journald[179]: Journal started
Apr 17 23:35:22.019904 systemd-journald[179]: Runtime Journal (/run/log/journal/ec250e4ab4079c242064970909796722) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:35:22.022329 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:35:22.028439 systemd-modules-load[180]: Inserted module 'overlay'
Apr 17 23:35:22.036482 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:35:22.037343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:22.039182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:35:22.049783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:35:22.056621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:35:22.062341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:35:22.080478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:35:22.086784 kernel: Bridge firewalling registered
Apr 17 23:35:22.087200 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 17 23:35:22.090680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:35:22.100784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:35:22.103602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:35:22.112922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:35:22.115511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:35:22.124671 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:35:22.125739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:35:22.133733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:35:22.145627 dracut-cmdline[210]: dracut-dracut-053
Apr 17 23:35:22.151032 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:35:22.179698 systemd-resolved[213]: Positive Trust Anchors:
Apr 17 23:35:22.179719 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:35:22.179787 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:35:22.188116 systemd-resolved[213]: Defaulting to hostname 'linux'.
Apr 17 23:35:22.191562 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:35:22.192267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:35:22.243489 kernel: SCSI subsystem initialized
Apr 17 23:35:22.253480 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:35:22.265485 kernel: iscsi: registered transport (tcp)
Apr 17 23:35:22.288656 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:35:22.288735 kernel: QLogic iSCSI HBA Driver
Apr 17 23:35:22.329222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:35:22.335739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:35:22.364008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:35:22.364085 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:35:22.364107 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:35:22.408495 kernel: raid6: avx512x4 gen() 17929 MB/s
Apr 17 23:35:22.426486 kernel: raid6: avx512x2 gen() 17827 MB/s
Apr 17 23:35:22.444483 kernel: raid6: avx512x1 gen() 17882 MB/s
Apr 17 23:35:22.462490 kernel: raid6: avx2x4 gen() 17711 MB/s
Apr 17 23:35:22.480491 kernel: raid6: avx2x2 gen() 17696 MB/s
Apr 17 23:35:22.499545 kernel: raid6: avx2x1 gen() 13773 MB/s
Apr 17 23:35:22.499620 kernel: raid6: using algorithm avx512x4 gen() 17929 MB/s
Apr 17 23:35:22.519558 kernel: raid6: .... xor() 7562 MB/s, rmw enabled
Apr 17 23:35:22.519643 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:35:22.543501 kernel: xor: automatically using best checksumming function avx
Apr 17 23:35:22.706489 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:35:22.718131 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:35:22.728761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:35:22.742361 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 17 23:35:22.747507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:35:22.757649 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:35:22.777697 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 17 23:35:22.810787 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:35:22.815679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:35:22.880406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:35:22.887872 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:35:22.913256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:35:22.916096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:35:22.918744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:35:22.919319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:35:22.929045 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:35:22.958925 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:35:22.987543 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:35:23.007354 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 17 23:35:23.007686 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 17 23:35:23.016879 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:35:23.042091 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 17 23:35:23.042535 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:35:23.042561 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:35:23.042583 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:7c:ea:fc:4d:17
Apr 17 23:35:23.017143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:35:23.018026 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:35:23.062438 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 17 23:35:23.066235 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 17 23:35:23.018704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:35:23.073388 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 17 23:35:23.018995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:23.040341 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:35:23.097363 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:35:23.097400 kernel: GPT:9289727 != 33554431
Apr 17 23:35:23.097420 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:35:23.097438 kernel: GPT:9289727 != 33554431
Apr 17 23:35:23.097475 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:35:23.097493 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:35:23.044123 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:23.059139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:23.092923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:35:23.093084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:23.109746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:35:23.126500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:23.129670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:35:23.153630 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:35:23.376445 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Apr 17 23:35:23.377346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 17 23:35:23.390496 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (457)
Apr 17 23:35:23.446114 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 17 23:35:23.468656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:35:23.474952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 17 23:35:23.475703 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 17 23:35:23.482677 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:35:23.494539 disk-uuid[633]: Primary Header is updated.
Apr 17 23:35:23.494539 disk-uuid[633]: Secondary Entries is updated.
Apr 17 23:35:23.494539 disk-uuid[633]: Secondary Header is updated.
Apr 17 23:35:23.501508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:35:23.508505 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:35:23.516475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:35:24.515779 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:35:24.519507 disk-uuid[634]: The operation has completed successfully.
Apr 17 23:35:24.658411 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:35:24.658571 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:35:24.685673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:35:24.691183 sh[977]: Success
Apr 17 23:35:24.714608 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:35:24.816770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:35:24.826593 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:35:24.828878 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:35:24.864068 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:35:24.865863 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:35:24.865888 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:35:24.870594 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:35:24.870673 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:35:24.977525 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:35:25.007835 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:35:25.009276 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:35:25.014691 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:35:25.019669 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:35:25.040573 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:35:25.045817 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:35:25.045896 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:35:25.065481 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:35:25.080010 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:35:25.084250 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:35:25.091987 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:35:25.100810 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:35:25.145542 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:35:25.151688 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:35:25.176287 systemd-networkd[1170]: lo: Link UP
Apr 17 23:35:25.176307 systemd-networkd[1170]: lo: Gained carrier
Apr 17 23:35:25.178092 systemd-networkd[1170]: Enumeration completed
Apr 17 23:35:25.178738 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:35:25.178747 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:35:25.179929 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:35:25.181272 systemd[1]: Reached target network.target - Network.
Apr 17 23:35:25.182661 systemd-networkd[1170]: eth0: Link UP
Apr 17 23:35:25.182667 systemd-networkd[1170]: eth0: Gained carrier
Apr 17 23:35:25.182683 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:35:25.200607 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.30.7/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:35:25.571627 ignition[1107]: Ignition 2.19.0
Apr 17 23:35:25.571638 ignition[1107]: Stage: fetch-offline
Apr 17 23:35:25.571852 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:25.573688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:35:25.571861 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:25.572411 ignition[1107]: Ignition finished successfully
Apr 17 23:35:25.586860 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:35:25.602492 ignition[1179]: Ignition 2.19.0
Apr 17 23:35:25.602511 ignition[1179]: Stage: fetch
Apr 17 23:35:25.602995 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:25.603009 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:25.603138 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:25.611864 ignition[1179]: PUT result: OK
Apr 17 23:35:25.613709 ignition[1179]: parsed url from cmdline: ""
Apr 17 23:35:25.613726 ignition[1179]: no config URL provided
Apr 17 23:35:25.613746 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:35:25.613763 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:35:25.613786 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:25.614545 ignition[1179]: PUT result: OK
Apr 17 23:35:25.614600 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 17 23:35:25.615218 ignition[1179]: GET result: OK
Apr 17 23:35:25.615358 ignition[1179]: parsing config with SHA512: 5a12d9cfeb4f39881970e92e9198c09f526e6130a5376651b94d9fe39c57a3ddc5be7bdec53d3af8f18b4bdfea5463b03a94b4576aa9cac24c373fb2b699618c
Apr 17 23:35:25.621150 unknown[1179]: fetched base config from "system"
Apr 17 23:35:25.621518 unknown[1179]: fetched base config from "system"
Apr 17 23:35:25.621527 unknown[1179]: fetched user config from "aws"
Apr 17 23:35:25.622558 ignition[1179]: fetch: fetch complete
Apr 17 23:35:25.622573 ignition[1179]: fetch: fetch passed
Apr 17 23:35:25.622644 ignition[1179]: Ignition finished successfully
Apr 17 23:35:25.625204 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:35:25.630704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:35:25.648262 ignition[1185]: Ignition 2.19.0
Apr 17 23:35:25.648280 ignition[1185]: Stage: kargs
Apr 17 23:35:25.648788 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:25.648804 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:25.648920 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:25.649817 ignition[1185]: PUT result: OK
Apr 17 23:35:25.652329 ignition[1185]: kargs: kargs passed
Apr 17 23:35:25.652401 ignition[1185]: Ignition finished successfully
Apr 17 23:35:25.654396 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:35:25.658720 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:35:25.676644 ignition[1191]: Ignition 2.19.0
Apr 17 23:35:25.676662 ignition[1191]: Stage: disks
Apr 17 23:35:25.677150 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:25.677165 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:25.677281 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:25.678293 ignition[1191]: PUT result: OK
Apr 17 23:35:25.680939 ignition[1191]: disks: disks passed
Apr 17 23:35:25.681020 ignition[1191]: Ignition finished successfully
Apr 17 23:35:25.683014 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:35:25.683748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:35:25.684116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:35:25.684703 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:35:25.685257 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:35:25.685880 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:35:25.691668 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:35:25.728842 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:35:25.732996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:35:25.739699 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:35:25.856743 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:35:25.857925 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:35:25.859935 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:35:25.876631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:35:25.880374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:35:25.881830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:35:25.881909 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:35:25.881943 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:35:25.899516 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219)
Apr 17 23:35:25.912948 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:35:25.913019 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:35:25.913043 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:35:25.913813 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:35:25.925877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:35:25.932093 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:35:25.932814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:35:26.417035 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:35:26.435677 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:35:26.441500 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:35:26.447215 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:35:26.735387 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:35:26.742595 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:35:26.745635 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:35:26.758381 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:35:26.759050 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:35:26.794904 ignition[1331]: INFO : Ignition 2.19.0
Apr 17 23:35:26.794904 ignition[1331]: INFO : Stage: mount
Apr 17 23:35:26.794904 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:26.794904 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:26.799604 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:26.799604 ignition[1331]: INFO : PUT result: OK
Apr 17 23:35:26.801377 ignition[1331]: INFO : mount: mount passed
Apr 17 23:35:26.801995 ignition[1331]: INFO : Ignition finished successfully
Apr 17 23:35:26.804490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:35:26.805256 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:35:26.812711 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:35:26.820999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:35:26.844487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343)
Apr 17 23:35:26.844556 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:35:26.847987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:35:26.849907 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:35:26.857503 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:35:26.860435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:35:26.889491 ignition[1360]: INFO : Ignition 2.19.0
Apr 17 23:35:26.889491 ignition[1360]: INFO : Stage: files
Apr 17 23:35:26.891108 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:26.891108 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:26.891108 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:26.891108 ignition[1360]: INFO : PUT result: OK
Apr 17 23:35:26.893726 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:35:26.894697 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:35:26.894697 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:35:26.932736 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:35:26.934664 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:35:26.934664 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:35:26.933807 unknown[1360]: wrote ssh authorized keys file for user: core
Apr 17 23:35:26.937984 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:35:26.937984 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:35:27.026052 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:35:27.184609 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:35:27.186627 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:35:27.187588 systemd-networkd[1170]: eth0: Gained IPv6LL
Apr 17 23:35:27.705206 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:35:29.281058 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:35:29.281058 ignition[1360]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:35:29.284161 ignition[1360]: INFO : files: files passed
Apr 17 23:35:29.284161 ignition[1360]: INFO : Ignition finished successfully
Apr 17 23:35:29.285090 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:35:29.294677 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:35:29.297990 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:35:29.301780 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:35:29.301917 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:35:29.316125 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:29.316125 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:29.319526 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:35:29.321688 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:35:29.323126 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:35:29.331697 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:35:29.371555 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:35:29.371736 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:35:29.373442 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:35:29.374608 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:35:29.375529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:35:29.382677 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:35:29.396616 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:35:29.403758 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:35:29.415543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:35:29.416518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:35:29.417546 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:35:29.418536 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:35:29.418757 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:35:29.419934 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:35:29.420898 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:35:29.421767 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:35:29.422698 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:35:29.423513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:35:29.424312 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:35:29.425137 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:35:29.425988 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:35:29.427351 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:35:29.428143 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:35:29.428863 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:35:29.429041 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:35:29.430330 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:35:29.431133 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:35:29.431852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:35:29.432183 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:35:29.432808 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:35:29.432977 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:35:29.434581 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:35:29.434767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:35:29.435512 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:35:29.435670 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:35:29.443877 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:35:29.444539 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:35:29.444912 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:35:29.447805 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:35:29.451679 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:35:29.451985 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:35:29.453810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:35:29.454041 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:35:29.464923 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:35:29.465057 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:35:29.474477 ignition[1412]: INFO : Ignition 2.19.0
Apr 17 23:35:29.474477 ignition[1412]: INFO : Stage: umount
Apr 17 23:35:29.474477 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:35:29.474477 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:35:29.474477 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:35:29.481251 ignition[1412]: INFO : PUT result: OK
Apr 17 23:35:29.481251 ignition[1412]: INFO : umount: umount passed
Apr 17 23:35:29.481251 ignition[1412]: INFO : Ignition finished successfully
Apr 17 23:35:29.485115 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:35:29.485276 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:35:29.486804 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:35:29.486867 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:35:29.489070 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:35:29.489146 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:35:29.489975 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:35:29.490036 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:35:29.491002 systemd[1]: Stopped target network.target - Network.
Apr 17 23:35:29.491488 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:35:29.491567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:35:29.491871 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:35:29.492126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:35:29.498598 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:35:29.499180 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:35:29.500364 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:35:29.500866 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:35:29.500932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:35:29.501572 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:35:29.501626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:35:29.502348 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:35:29.502424 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:35:29.503127 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:35:29.503190 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:35:29.503994 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:35:29.504668 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:35:29.507063 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:35:29.508592 systemd-networkd[1170]: eth0: DHCPv6 lease lost
Apr 17 23:35:29.511333 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:35:29.511711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:35:29.512731 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:35:29.512896 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:35:29.517108 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:35:29.517175 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:35:29.523611 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:35:29.525163 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:35:29.525238 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:35:29.527384 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:35:29.527437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:35:29.527902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:35:29.527963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:35:29.528628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:35:29.528686 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:35:29.529423 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:35:29.546614 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:35:29.546853 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:35:29.548300 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:35:29.548446 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:35:29.549953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:35:29.550046 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:35:29.551110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:35:29.551163 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:35:29.551874 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:35:29.551947 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:35:29.553292 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:35:29.553362 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:35:29.554726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:35:29.554795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:35:29.562887 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:35:29.564407 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:35:29.564523 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:35:29.566621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:35:29.566698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:35:29.572698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:35:29.572841 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:35:29.681337 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:35:29.681510 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:35:29.683247 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:35:29.684321 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:35:29.684430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:35:29.689666 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:35:29.700076 systemd[1]: Switching root.
Apr 17 23:35:29.735082 systemd-journald[179]: Journal stopped
Apr 17 23:35:31.655213 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:35:31.655322 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:35:31.655346 kernel: SELinux: policy capability open_perms=1
Apr 17 23:35:31.655378 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:35:31.655399 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:35:31.655418 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:35:31.655439 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:35:31.656994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:35:31.657028 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:35:31.657049 kernel: audit: type=1403 audit(1776468930.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:35:31.657072 systemd[1]: Successfully loaded SELinux policy in 58.836ms.
Apr 17 23:35:31.657102 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.923ms.
Apr 17 23:35:31.657122 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:35:31.657139 systemd[1]: Detected virtualization amazon.
Apr 17 23:35:31.657156 systemd[1]: Detected architecture x86-64.
Apr 17 23:35:31.657174 systemd[1]: Detected first boot.
Apr 17 23:35:31.657196 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:35:31.657215 zram_generator::config[1454]: No configuration found.
Apr 17 23:35:31.657233 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:35:31.657252 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:35:31.657271 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:35:31.657293 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:35:31.657314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:35:31.657335 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:35:31.657359 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:35:31.657377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:35:31.657399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:35:31.657419 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:35:31.657443 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:35:31.657485 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:35:31.657506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:35:31.657527 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:35:31.657555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:35:31.657580 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:35:31.657602 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:35:31.657624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:35:31.657644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:35:31.657668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:35:31.657688 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:35:31.657709 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:35:31.657731 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:35:31.657755 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:35:31.657775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:35:31.657801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:35:31.657822 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:35:31.657842 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:35:31.657863 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:35:31.657885 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:35:31.657906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:35:31.657927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:35:31.657950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:35:31.665703 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:35:31.665753 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:35:31.665776 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:35:31.665809 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:35:31.665833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:31.665857 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:35:31.665879 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:35:31.665901 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:35:31.665932 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:35:31.665956 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:35:31.665979 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:35:31.666002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:35:31.666025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:35:31.666049 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:35:31.666072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:35:31.666094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:35:31.666120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:35:31.666157 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:35:31.666182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:35:31.666205 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:35:31.666228 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:35:31.669140 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:35:31.669173 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:35:31.669196 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:35:31.669221 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:35:31.669253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:35:31.669276 kernel: loop: module loaded
Apr 17 23:35:31.669299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:35:31.669321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:35:31.669340 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:35:31.669358 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:35:31.669375 systemd[1]: Stopped verity-setup.service.
Apr 17 23:35:31.669394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:31.669415 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:35:31.669440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:35:31.669527 systemd-journald[1536]: Collecting audit messages is disabled.
Apr 17 23:35:31.669568 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:35:31.669590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:35:31.669611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:35:31.669637 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:35:31.669658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:35:31.669678 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:35:31.669699 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:35:31.669722 systemd-journald[1536]: Journal started
Apr 17 23:35:31.669770 systemd-journald[1536]: Runtime Journal (/run/log/journal/ec250e4ab4079c242064970909796722) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:35:31.266698 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:35:31.318780 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:35:31.674547 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:35:31.319240 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:35:31.673223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:35:31.673484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:35:31.674920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:35:31.675130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:35:31.677786 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:35:31.677983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:35:31.679655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:35:31.681476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:35:31.682650 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:35:31.692757 kernel: fuse: init (API version 7.39)
Apr 17 23:35:31.695849 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:35:31.696089 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:35:31.713860 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:35:31.722667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:35:31.734744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:35:31.736117 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:35:31.736164 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:35:31.741056 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:35:31.754828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:35:31.762777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:35:31.764687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:35:31.768736 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:35:31.774699 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:35:31.775414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:35:31.776877 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:35:31.778676 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:35:31.781153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:35:31.786705 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:35:31.791826 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:35:31.793765 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:35:31.794813 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:35:31.812557 kernel: ACPI: bus type drm_connector registered
Apr 17 23:35:31.807556 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:35:31.807817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:35:31.819322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:35:31.828838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:35:31.866602 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:35:31.867499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:35:31.883590 systemd-journald[1536]: Time spent on flushing to /var/log/journal/ec250e4ab4079c242064970909796722 is 74.626ms for 984 entries.
Apr 17 23:35:31.883590 systemd-journald[1536]: System Journal (/var/log/journal/ec250e4ab4079c242064970909796722) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:35:31.972905 systemd-journald[1536]: Received client request to flush runtime journal.
Apr 17 23:35:31.972987 kernel: loop0: detected capacity change from 0 to 140768
Apr 17 23:35:31.882718 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:35:31.901899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:35:31.966181 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:35:31.975652 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:35:31.977518 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:35:31.980747 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:35:31.991393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:35:31.995609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:35:31.998981 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:35:32.033853 udevadm[1596]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:35:32.039481 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:35:32.064479 kernel: loop1: detected capacity change from 0 to 219192
Apr 17 23:35:32.100361 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Apr 17 23:35:32.100392 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Apr 17 23:35:32.117378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:35:32.299531 kernel: loop2: detected capacity change from 0 to 142488
Apr 17 23:35:32.437511 kernel: loop3: detected capacity change from 0 to 61336
Apr 17 23:35:32.596125 kernel: loop4: detected capacity change from 0 to 140768
Apr 17 23:35:32.627788 kernel: loop5: detected capacity change from 0 to 219192
Apr 17 23:35:32.661521 kernel: loop6: detected capacity change from 0 to 142488
Apr 17 23:35:32.683492 kernel: loop7: detected capacity change from 0 to 61336
Apr 17 23:35:32.699466 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:35:32.700144 (sd-merge)[1608]: Merged extensions into '/usr'.
Apr 17 23:35:32.709063 systemd[1]: Reloading requested from client PID 1581 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:35:32.709501 systemd[1]: Reloading...
Apr 17 23:35:32.793518 zram_generator::config[1630]: No configuration found.
Apr 17 23:35:32.981081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:35:33.046267 systemd[1]: Reloading finished in 336 ms.
Apr 17 23:35:33.088519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:35:33.089370 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:35:33.098720 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:35:33.102656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:35:33.107792 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:35:33.139360 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:35:33.139383 systemd[1]: Reloading...
Apr 17 23:35:33.148559 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:35:33.149103 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:35:33.157683 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:35:33.158189 systemd-tmpfiles[1687]: ACLs are not supported, ignoring.
Apr 17 23:35:33.158283 systemd-tmpfiles[1687]: ACLs are not supported, ignoring.
Apr 17 23:35:33.164910 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:35:33.164930 systemd-tmpfiles[1687]: Skipping /boot
Apr 17 23:35:33.187260 systemd-udevd[1688]: Using default interface naming scheme 'v255'.
Apr 17 23:35:33.195082 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:35:33.195097 systemd-tmpfiles[1687]: Skipping /boot
Apr 17 23:35:33.269517 zram_generator::config[1716]: No configuration found.
Apr 17 23:35:33.425649 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:35:33.535502 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 23:35:33.556373 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 17 23:35:33.556847 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:35:33.556888 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 17 23:35:33.556914 kernel: ACPI: button: Sleep Button [SLPF]
Apr 17 23:35:33.626388 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Apr 17 23:35:33.646170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:35:33.723490 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1724)
Apr 17 23:35:33.804515 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:35:33.811326 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:35:33.811575 systemd[1]: Reloading finished in 671 ms.
Apr 17 23:35:33.834927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:35:33.837913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:35:33.961858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:35:33.964941 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:35:33.968789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:35:33.973862 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:35:33.993651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:35:33.994853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:35:34.000138 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:35:34.016752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:35:34.024293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:35:34.032485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:35:34.033392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:35:34.037791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:35:34.054908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:35:34.067854 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:35:34.077864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:35:34.086820 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:35:34.089404 lvm[1884]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:35:34.093517 ldconfig[1576]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:35:34.103330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:35:34.104976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 17 23:35:34.114403 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:35:34.116339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:35:34.116558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:35:34.117986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:35:34.118773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:35:34.121164 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:35:34.121346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:35:34.140781 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:35:34.149882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:35:34.151817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:35:34.161033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:35:34.170815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:35:34.179669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:35:34.196310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:35:34.197265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:35:34.198901 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:35:34.205287 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:35:34.206188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 17 23:35:34.211955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:35:34.214337 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:35:34.219078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:35:34.220057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:35:34.220185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:35:34.222311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:35:34.222633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:35:34.224389 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:35:34.225188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:35:34.232860 systemd[1]: Finished ensure-sysext.service. Apr 17 23:35:34.241214 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:35:34.249387 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:35:34.249877 augenrules[1919]: No rules Apr 17 23:35:34.250546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:35:34.252656 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:34.261191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:35:34.268775 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:35:34.269396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:35:34.269505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 17 23:35:34.277809 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:35:34.278401 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:35:34.290483 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:35:34.300830 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:35:34.316799 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:35:34.338364 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:35:34.342123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:35:34.409069 systemd-networkd[1894]: lo: Link UP Apr 17 23:35:34.409080 systemd-networkd[1894]: lo: Gained carrier Apr 17 23:35:34.411139 systemd-networkd[1894]: Enumeration completed Apr 17 23:35:34.411286 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:35:34.413064 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:35:34.413079 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:35:34.418377 systemd-networkd[1894]: eth0: Link UP Apr 17 23:35:34.418872 systemd-networkd[1894]: eth0: Gained carrier Apr 17 23:35:34.419013 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:35:34.422904 systemd-resolved[1896]: Positive Trust Anchors: Apr 17 23:35:34.422923 systemd-resolved[1896]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:35:34.422979 systemd-resolved[1896]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:35:34.423710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:35:34.428568 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.30.7/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:35:34.430095 systemd-resolved[1896]: Defaulting to hostname 'linux'. Apr 17 23:35:34.432542 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:35:34.433351 systemd[1]: Reached target network.target - Network. Apr 17 23:35:34.434378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:35:34.435082 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:35:34.435893 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:35:34.436615 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:35:34.437503 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:35:34.438274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:35:34.438936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 17 23:35:34.439519 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:35:34.439562 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:35:34.440166 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:35:34.441094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:35:34.443193 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:35:34.453562 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:35:34.454935 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:35:34.455545 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:35:34.455966 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:35:34.456405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:35:34.456497 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:35:34.457728 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:35:34.461678 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:35:34.469738 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:35:34.474285 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:35:34.478016 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:35:34.479542 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:35:34.488141 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:35:34.497141 systemd[1]: Started ntpd.service - Network Time Service. 
Apr 17 23:35:34.500676 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:35:34.504941 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:35:34.509197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:35:34.522552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:35:34.547662 jq[1954]: false Apr 17 23:35:34.548294 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:35:34.550423 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:35:34.551116 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:35:34.562349 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:35:34.584958 dbus-daemon[1953]: [system] SELinux support is enabled Apr 17 23:35:34.590631 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 17 23:35:34.602664 extend-filesystems[1955]: Found loop4 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found loop5 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found loop6 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found loop7 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p1 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p2 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p3 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found usr Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p4 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p6 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p7 Apr 17 23:35:34.602664 extend-filesystems[1955]: Found nvme0n1p9 Apr 17 23:35:34.602664 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9 Apr 17 23:35:34.592441 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:35:34.604951 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:35:34.612967 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:35:34.613234 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:35:34.624686 jq[1973]: true Apr 17 23:35:34.628186 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:35:34.628480 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:35:34.652901 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 17 23:35:34.652965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:35:34.655629 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:35:34.655665 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:35:34.667886 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:35:34.674079 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:35:34.674341 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:35:34.679641 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9 Apr 17 23:35:34.696284 jq[1984]: true Apr 17 23:35:34.694794 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:35:34.698164 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: ---------------------------------------------------- Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: corporation. 
Support and training for ntp-4 are Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: available at https://www.nwtime.org/support Apr 17 23:35:34.700918 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: ---------------------------------------------------- Apr 17 23:35:34.698198 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:35:34.709150 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: proto: precision = 0.095 usec (-23) Apr 17 23:35:34.709150 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: basedate set to 2026-04-05 Apr 17 23:35:34.709150 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: gps base set to 2026-04-05 (week 2413) Apr 17 23:35:34.709247 extend-filesystems[1996]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:35:34.722593 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:35:34.698210 ntpd[1957]: ---------------------------------------------------- Apr 17 23:35:34.698221 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:35:34.698230 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:35:34.698241 ntpd[1957]: corporation. 
Support and training for ntp-4 are Apr 17 23:35:34.698252 ntpd[1957]: available at https://www.nwtime.org/support Apr 17 23:35:34.698262 ntpd[1957]: ---------------------------------------------------- Apr 17 23:35:34.704384 ntpd[1957]: proto: precision = 0.095 usec (-23) Apr 17 23:35:34.705655 ntpd[1957]: basedate set to 2026-04-05 Apr 17 23:35:34.705675 ntpd[1957]: gps base set to 2026-04-05 (week 2413) Apr 17 23:35:34.729697 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listen normally on 3 eth0 172.31.30.7:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listen normally on 4 lo [::1]:123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: bind(21) AF_INET6 fe80::47c:eaff:fefc:4d17%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: unable to create socket on eth0 (5) for fe80::47c:eaff:fefc:4d17%2#123 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: failed to init interface for address fe80::47c:eaff:fefc:4d17%2 Apr 17 23:35:34.734243 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Apr 17 23:35:34.729763 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:35:34.729965 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:35:34.730001 ntpd[1957]: Listen normally on 3 eth0 172.31.30.7:123 Apr 17 23:35:34.730045 ntpd[1957]: Listen normally on 4 lo [::1]:123 Apr 17 23:35:34.730090 ntpd[1957]: bind(21) AF_INET6 fe80::47c:eaff:fefc:4d17%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 
23:35:34.730114 ntpd[1957]: unable to create socket on eth0 (5) for fe80::47c:eaff:fefc:4d17%2#123 Apr 17 23:35:34.730146 ntpd[1957]: failed to init interface for address fe80::47c:eaff:fefc:4d17%2 Apr 17 23:35:34.730183 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Apr 17 23:35:34.749152 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:35:34.750702 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:35:34.750702 ntpd[1957]: 17 Apr 23:35:34 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:35:34.749194 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:35:34.753505 update_engine[1965]: I20260417 23:35:34.753377 1965 main.cc:92] Flatcar Update Engine starting Apr 17 23:35:34.775036 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:35:34.785682 update_engine[1965]: I20260417 23:35:34.779043 1965 update_check_scheduler.cc:74] Next update check in 4m24s Apr 17 23:35:34.779908 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:35:34.780894 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:35:34.798479 tar[1977]: linux-amd64/LICENSE Apr 17 23:35:34.798479 tar[1977]: linux-amd64/helm Apr 17 23:35:34.805172 (ntainerd)[2002]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:35:34.834411 systemd-logind[1963]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:35:34.834442 systemd-logind[1963]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 17 23:35:34.834492 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:35:34.837823 systemd-logind[1963]: New seat seat0. Apr 17 23:35:34.846447 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 17 23:35:34.851043 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1736) Apr 17 23:35:34.859046 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:35:34.881334 extend-filesystems[1996]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:35:34.881334 extend-filesystems[1996]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:35:34.881334 extend-filesystems[1996]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:35:34.904864 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:35:34.885866 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:35:34.886484 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:35:34.913990 coreos-metadata[1952]: Apr 17 23:35:34.913 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:35:34.916118 coreos-metadata[1952]: Apr 17 23:35:34.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:35:34.916118 coreos-metadata[1952]: Apr 17 23:35:34.915 INFO Fetch successful Apr 17 23:35:34.916118 coreos-metadata[1952]: Apr 17 23:35:34.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:35:34.927308 coreos-metadata[1952]: Apr 17 23:35:34.921 INFO Fetch successful Apr 17 23:35:34.927308 coreos-metadata[1952]: Apr 17 23:35:34.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:35:34.927308 coreos-metadata[1952]: Apr 17 23:35:34.927 INFO Fetch successful Apr 17 23:35:34.927308 coreos-metadata[1952]: Apr 17 23:35:34.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:35:34.932091 coreos-metadata[1952]: Apr 17 23:35:34.932 INFO Fetch successful Apr 17 23:35:34.932091 coreos-metadata[1952]: Apr 17 23:35:34.932 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:35:34.933314 coreos-metadata[1952]: Apr 17 23:35:34.932 INFO Fetch failed with 404: resource not found Apr 17 23:35:34.933314 coreos-metadata[1952]: Apr 17 23:35:34.932 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:35:34.942508 coreos-metadata[1952]: Apr 17 23:35:34.942 INFO Fetch successful Apr 17 23:35:34.942508 coreos-metadata[1952]: Apr 17 23:35:34.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:35:34.946475 coreos-metadata[1952]: Apr 17 23:35:34.944 INFO Fetch successful Apr 17 23:35:34.946475 coreos-metadata[1952]: Apr 17 23:35:34.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:35:34.947989 coreos-metadata[1952]: Apr 17 23:35:34.946 INFO Fetch successful Apr 17 23:35:34.947989 coreos-metadata[1952]: Apr 17 23:35:34.946 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:35:34.949913 coreos-metadata[1952]: Apr 17 23:35:34.949 INFO Fetch successful Apr 17 23:35:34.949913 coreos-metadata[1952]: Apr 17 23:35:34.949 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:35:34.952354 coreos-metadata[1952]: Apr 17 23:35:34.951 INFO Fetch successful Apr 17 23:35:34.953544 bash[2040]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:35:34.962231 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:35:34.978841 systemd[1]: Starting sshkeys.service... Apr 17 23:35:35.068179 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:35:35.079367 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 17 23:35:35.125372 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:35:35.125616 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:35:35.130799 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:35:35.131692 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1997 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:35:35.136389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:35:35.143582 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:35:35.194857 polkitd[2074]: Started polkitd version 121 Apr 17 23:35:35.204376 polkitd[2074]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:35:35.206597 polkitd[2074]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:35:35.208742 polkitd[2074]: Finished loading, compiling and executing 2 rules Apr 17 23:35:35.210645 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:35:35.210858 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 17 23:35:35.211873 polkitd[2074]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:35:35.223474 coreos-metadata[2056]: Apr 17 23:35:35.219 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:35:35.223474 coreos-metadata[2056]: Apr 17 23:35:35.220 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:35:35.223474 coreos-metadata[2056]: Apr 17 23:35:35.221 INFO Fetch successful Apr 17 23:35:35.223474 coreos-metadata[2056]: Apr 17 23:35:35.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:35:35.229952 coreos-metadata[2056]: Apr 17 23:35:35.229 INFO Fetch successful Apr 17 23:35:35.231392 unknown[2056]: wrote ssh authorized keys file for user: core Apr 17 23:35:35.232929 systemd-resolved[1896]: System hostname changed to 'ip-172-31-30-7'. Apr 17 23:35:35.233091 systemd-hostnamed[1997]: Hostname set to (transient) Apr 17 23:35:35.295478 update-ssh-keys[2093]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:35:35.297125 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:35:35.308532 systemd[1]: Finished sshkeys.service. 
Apr 17 23:35:35.367519 locksmithd[2012]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:35:35.614538 containerd[2002]: time="2026-04-17T23:35:35.613548818Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:35:35.699986 ntpd[1957]: bind(24) AF_INET6 fe80::47c:eaff:fefc:4d17%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:35:35.700502 ntpd[1957]: 17 Apr 23:35:35 ntpd[1957]: bind(24) AF_INET6 fe80::47c:eaff:fefc:4d17%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:35:35.700502 ntpd[1957]: 17 Apr 23:35:35 ntpd[1957]: unable to create socket on eth0 (6) for fe80::47c:eaff:fefc:4d17%2#123 Apr 17 23:35:35.700502 ntpd[1957]: 17 Apr 23:35:35 ntpd[1957]: failed to init interface for address fe80::47c:eaff:fefc:4d17%2 Apr 17 23:35:35.700031 ntpd[1957]: unable to create socket on eth0 (6) for fe80::47c:eaff:fefc:4d17%2#123 Apr 17 23:35:35.700047 ntpd[1957]: failed to init interface for address fe80::47c:eaff:fefc:4d17%2 Apr 17 23:35:35.729003 containerd[2002]: time="2026-04-17T23:35:35.727719323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.732693 containerd[2002]: time="2026-04-17T23:35:35.732622440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:35.733104 containerd[2002]: time="2026-04-17T23:35:35.733082801Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:35:35.733204 containerd[2002]: time="2026-04-17T23:35:35.733190709Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 17 23:35:35.734233 containerd[2002]: time="2026-04-17T23:35:35.733443738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:35:35.734545 containerd[2002]: time="2026-04-17T23:35:35.734346367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.734545 containerd[2002]: time="2026-04-17T23:35:35.734486718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:35.734545 containerd[2002]: time="2026-04-17T23:35:35.734507590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.735368 containerd[2002]: time="2026-04-17T23:35:35.735330381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:35.735500 containerd[2002]: time="2026-04-17T23:35:35.735483311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.735590 containerd[2002]: time="2026-04-17T23:35:35.735575539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:35.735718 containerd[2002]: time="2026-04-17T23:35:35.735694503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.736180 containerd[2002]: time="2026-04-17T23:35:35.736152680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:35:35.737727 containerd[2002]: time="2026-04-17T23:35:35.737206979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:35:35.737727 containerd[2002]: time="2026-04-17T23:35:35.737418132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:35:35.737727 containerd[2002]: time="2026-04-17T23:35:35.737439508Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:35:35.738497 containerd[2002]: time="2026-04-17T23:35:35.738471001Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:35:35.739059 containerd[2002]: time="2026-04-17T23:35:35.738688261Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.744940674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.745009594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.745035521Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.745057529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.745077547Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 17 23:35:35.746113 containerd[2002]: time="2026-04-17T23:35:35.745251590Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:35:35.747728 containerd[2002]: time="2026-04-17T23:35:35.747697945Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:35:35.748467 containerd[2002]: time="2026-04-17T23:35:35.748423781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:35:35.749217 containerd[2002]: time="2026-04-17T23:35:35.748597045Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:35:35.749217 containerd[2002]: time="2026-04-17T23:35:35.749095422Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:35:35.749217 containerd[2002]: time="2026-04-17T23:35:35.749118724Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749217 containerd[2002]: time="2026-04-17T23:35:35.749153326Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749217 containerd[2002]: time="2026-04-17T23:35:35.749172976Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749470 containerd[2002]: time="2026-04-17T23:35:35.749197213Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749470 containerd[2002]: time="2026-04-17T23:35:35.749439686Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 17 23:35:35.749592 containerd[2002]: time="2026-04-17T23:35:35.749577631Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749867 containerd[2002]: time="2026-04-17T23:35:35.749742362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.749867 containerd[2002]: time="2026-04-17T23:35:35.749763873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:35:35.750369 containerd[2002]: time="2026-04-17T23:35:35.750220312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.750369 containerd[2002]: time="2026-04-17T23:35:35.750251708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.750369 containerd[2002]: time="2026-04-17T23:35:35.750302302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.750369 containerd[2002]: time="2026-04-17T23:35:35.750322918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.750369 containerd[2002]: time="2026-04-17T23:35:35.750344669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751209 containerd[2002]: time="2026-04-17T23:35:35.750833624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751209 containerd[2002]: time="2026-04-17T23:35:35.750860278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751209 containerd[2002]: time="2026-04-17T23:35:35.750881923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 17 23:35:35.751209 containerd[2002]: time="2026-04-17T23:35:35.750926549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751444 containerd[2002]: time="2026-04-17T23:35:35.750960551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751444 containerd[2002]: time="2026-04-17T23:35:35.751399349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751444 containerd[2002]: time="2026-04-17T23:35:35.751424786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751911 containerd[2002]: time="2026-04-17T23:35:35.751634030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.751911 containerd[2002]: time="2026-04-17T23:35:35.751669388Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:35:35.752120 containerd[2002]: time="2026-04-17T23:35:35.752028041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.752120 containerd[2002]: time="2026-04-17T23:35:35.752052526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.752585 containerd[2002]: time="2026-04-17T23:35:35.752290905Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:35:35.752802 containerd[2002]: time="2026-04-17T23:35:35.752783654Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753008713Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753031620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753053585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753070582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753090604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753105760Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:35:35.754185 containerd[2002]: time="2026-04-17T23:35:35.753123457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:35:35.754513 containerd[2002]: time="2026-04-17T23:35:35.753543640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:35:35.754513 containerd[2002]: time="2026-04-17T23:35:35.753629616Z" level=info msg="Connect containerd service" Apr 17 23:35:35.755482 containerd[2002]: time="2026-04-17T23:35:35.755444311Z" level=info msg="using legacy CRI server" Apr 17 23:35:35.756800 containerd[2002]: time="2026-04-17T23:35:35.755777138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:35:35.756800 containerd[2002]: time="2026-04-17T23:35:35.755929353Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:35:35.758463 containerd[2002]: time="2026-04-17T23:35:35.758419875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:35:35.759397 containerd[2002]: time="2026-04-17T23:35:35.759353674Z" level=info msg="Start subscribing containerd event" Apr 17 23:35:35.759529 containerd[2002]: time="2026-04-17T23:35:35.759513622Z" level=info msg="Start recovering state" Apr 17 23:35:35.759687 containerd[2002]: time="2026-04-17T23:35:35.759674363Z" level=info msg="Start event monitor" Apr 17 23:35:35.760831 containerd[2002]: time="2026-04-17T23:35:35.760809897Z" level=info msg="Start 
snapshots syncer" Apr 17 23:35:35.760917 containerd[2002]: time="2026-04-17T23:35:35.760903801Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:35:35.760977 containerd[2002]: time="2026-04-17T23:35:35.760966388Z" level=info msg="Start streaming server" Apr 17 23:35:35.761595 containerd[2002]: time="2026-04-17T23:35:35.761576058Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:35:35.763768 containerd[2002]: time="2026-04-17T23:35:35.762377622Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:35:35.763768 containerd[2002]: time="2026-04-17T23:35:35.762567088Z" level=info msg="containerd successfully booted in 0.154043s" Apr 17 23:35:35.763097 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:35:36.116802 tar[1977]: linux-amd64/README.md Apr 17 23:35:36.135560 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:35:36.142664 sshd_keygen[1994]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:35:36.146596 systemd-networkd[1894]: eth0: Gained IPv6LL Apr 17 23:35:36.150958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:35:36.154374 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:35:36.162798 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:35:36.175738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:36.180219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:35:36.195485 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:35:36.208917 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:35:36.230010 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:35:36.239896 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 17 23:35:36.240140 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:35:36.251010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:35:36.265309 amazon-ssm-agent[2168]: Initializing new seelog logger Apr 17 23:35:36.265714 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Apr 17 23:35:36.265714 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.265714 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 processing appconfig overrides Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO Proxy environment variables: Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 processing appconfig overrides Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.268484 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 processing appconfig overrides Apr 17 23:35:36.271509 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.271509 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:35:36.271509 amazon-ssm-agent[2168]: 2026/04/17 23:35:36 processing appconfig overrides Apr 17 23:35:36.271867 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:35:36.282732 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 17 23:35:36.292692 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:35:36.293674 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:35:36.366753 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO https_proxy: Apr 17 23:35:36.465369 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO http_proxy: Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO no_proxy: Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO Agent will take identity from EC2 Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:36.493708 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [Registrar] Starting registrar module Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:35:36.494025 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:35:36.562572 amazon-ssm-agent[2168]: 2026-04-17 23:35:36 INFO [CredentialRefresher] Next credential rotation will be in 32.47499147391667 minutes Apr 17 23:35:37.270087 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:35:37.277866 systemd[1]: Started sshd@0-172.31.30.7:22-20.229.252.112:39522.service - OpenSSH per-connection server daemon (20.229.252.112:39522). 
Apr 17 23:35:37.510427 amazon-ssm-agent[2168]: 2026-04-17 23:35:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:35:37.611510 amazon-ssm-agent[2168]: 2026-04-17 23:35:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2201) started Apr 17 23:35:37.712873 amazon-ssm-agent[2168]: 2026-04-17 23:35:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:35:38.317289 sshd[2198]: Accepted publickey for core from 20.229.252.112 port 39522 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:38.320222 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:38.330358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:35:38.342685 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:35:38.348384 systemd-logind[1963]: New session 1 of user core. Apr 17 23:35:38.360335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:35:38.369891 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:35:38.377220 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:35:38.506314 systemd[2213]: Queued start job for default target default.target. Apr 17 23:35:38.517854 systemd[2213]: Created slice app.slice - User Application Slice. Apr 17 23:35:38.517909 systemd[2213]: Reached target paths.target - Paths. Apr 17 23:35:38.517930 systemd[2213]: Reached target timers.target - Timers. Apr 17 23:35:38.519661 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:35:38.541311 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Apr 17 23:35:38.541504 systemd[2213]: Reached target sockets.target - Sockets. Apr 17 23:35:38.541537 systemd[2213]: Reached target basic.target - Basic System. Apr 17 23:35:38.541609 systemd[2213]: Reached target default.target - Main User Target. Apr 17 23:35:38.541650 systemd[2213]: Startup finished in 156ms. Apr 17 23:35:38.541824 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:35:38.548725 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:35:38.698721 ntpd[1957]: Listen normally on 7 eth0 [fe80::47c:eaff:fefc:4d17%2]:123 Apr 17 23:35:38.700038 ntpd[1957]: 17 Apr 23:35:38 ntpd[1957]: Listen normally on 7 eth0 [fe80::47c:eaff:fefc:4d17%2]:123 Apr 17 23:35:39.258949 systemd[1]: Started sshd@1-172.31.30.7:22-20.229.252.112:39532.service - OpenSSH per-connection server daemon (20.229.252.112:39532). Apr 17 23:35:39.620200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:39.621922 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:35:39.623724 systemd[1]: Startup finished in 683ms (kernel) + 8.546s (initrd) + 9.409s (userspace) = 18.639s. Apr 17 23:35:39.627658 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:40.237217 sshd[2224]: Accepted publickey for core from 20.229.252.112 port 39532 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:40.239055 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:40.244835 systemd-logind[1963]: New session 2 of user core. Apr 17 23:35:40.250724 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:35:40.919391 sshd[2224]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:40.923212 systemd[1]: sshd@1-172.31.30.7:22-20.229.252.112:39532.service: Deactivated successfully. 
Apr 17 23:35:40.925498 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:35:40.927173 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:35:40.928788 systemd-logind[1963]: Removed session 2. Apr 17 23:35:41.090703 systemd[1]: Started sshd@2-172.31.30.7:22-20.229.252.112:39542.service - OpenSSH per-connection server daemon (20.229.252.112:39542). Apr 17 23:35:42.808915 systemd-resolved[1896]: Clock change detected. Flushing caches. Apr 17 23:35:43.041707 kubelet[2231]: E0417 23:35:43.041615 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:43.044767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:43.045010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:43.045628 systemd[1]: kubelet.service: Consumed 1.026s CPU time. Apr 17 23:35:43.188398 sshd[2245]: Accepted publickey for core from 20.229.252.112 port 39542 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:43.190220 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:43.195365 systemd-logind[1963]: New session 3 of user core. Apr 17 23:35:43.201141 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:35:43.866093 sshd[2245]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:43.869740 systemd[1]: sshd@2-172.31.30.7:22-20.229.252.112:39542.service: Deactivated successfully. Apr 17 23:35:43.871921 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:35:43.873785 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit. 
Apr 17 23:35:43.875091 systemd-logind[1963]: Removed session 3. Apr 17 23:35:44.038222 systemd[1]: Started sshd@3-172.31.30.7:22-20.229.252.112:39556.service - OpenSSH per-connection server daemon (20.229.252.112:39556). Apr 17 23:35:45.018608 sshd[2254]: Accepted publickey for core from 20.229.252.112 port 39556 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:45.026031 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:45.056988 systemd-logind[1963]: New session 4 of user core. Apr 17 23:35:45.067284 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:35:45.700232 sshd[2254]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:45.704749 systemd[1]: sshd@3-172.31.30.7:22-20.229.252.112:39556.service: Deactivated successfully. Apr 17 23:35:45.706710 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:35:45.708026 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:35:45.709257 systemd-logind[1963]: Removed session 4. Apr 17 23:35:45.872245 systemd[1]: Started sshd@4-172.31.30.7:22-20.229.252.112:40512.service - OpenSSH per-connection server daemon (20.229.252.112:40512). Apr 17 23:35:46.847682 sshd[2261]: Accepted publickey for core from 20.229.252.112 port 40512 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:46.849210 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:46.854819 systemd-logind[1963]: New session 5 of user core. Apr 17 23:35:46.862145 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 23:35:47.404020 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:35:47.404450 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:47.420818 sudo[2264]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:47.580431 sshd[2261]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:47.585097 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:35:47.586271 systemd[1]: sshd@4-172.31.30.7:22-20.229.252.112:40512.service: Deactivated successfully. Apr 17 23:35:47.588743 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:35:47.589872 systemd-logind[1963]: Removed session 5. Apr 17 23:35:47.752266 systemd[1]: Started sshd@5-172.31.30.7:22-20.229.252.112:40528.service - OpenSSH per-connection server daemon (20.229.252.112:40528). Apr 17 23:35:48.727125 sshd[2269]: Accepted publickey for core from 20.229.252.112 port 40528 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:48.729046 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:48.734179 systemd-logind[1963]: New session 6 of user core. Apr 17 23:35:48.741128 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:35:49.249725 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:35:49.250144 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:49.254519 sudo[2273]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:49.260290 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:35:49.260681 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:49.278248 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:35:49.280909 auditctl[2276]: No rules Apr 17 23:35:49.281342 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:35:49.281582 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:49.288352 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:35:49.318440 augenrules[2294]: No rules Apr 17 23:35:49.319258 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:35:49.322468 sudo[2272]: pam_unix(sudo:session): session closed for user root Apr 17 23:35:49.482084 sshd[2269]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:49.486605 systemd[1]: sshd@5-172.31.30.7:22-20.229.252.112:40528.service: Deactivated successfully. Apr 17 23:35:49.488843 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:35:49.490741 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:35:49.492186 systemd-logind[1963]: Removed session 6. Apr 17 23:35:49.667416 systemd[1]: Started sshd@6-172.31.30.7:22-20.229.252.112:40544.service - OpenSSH per-connection server daemon (20.229.252.112:40544). 
Apr 17 23:35:50.679900 sshd[2302]: Accepted publickey for core from 20.229.252.112 port 40544 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:35:50.681173 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:50.687370 systemd-logind[1963]: New session 7 of user core. Apr 17 23:35:50.693196 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:35:51.217018 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:35:51.217418 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:35:51.906747 (dockerd)[2322]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:35:51.907169 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:35:52.998635 dockerd[2322]: time="2026-04-17T23:35:52.998565768Z" level=info msg="Starting up" Apr 17 23:35:53.295607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:35:53.303151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:53.625771 dockerd[2322]: time="2026-04-17T23:35:53.625634181Z" level=info msg="Loading containers: start." Apr 17 23:35:53.926954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:35:53.940059 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:35:53.953889 kernel: Initializing XFRM netlink socket Apr 17 23:35:53.996599 kubelet[2390]: E0417 23:35:53.996473 2390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:35:54.002373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:35:54.002575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:35:54.065260 (udev-worker)[2348]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:35:54.174070 systemd-networkd[1894]: docker0: Link UP Apr 17 23:35:54.191552 dockerd[2322]: time="2026-04-17T23:35:54.190831628Z" level=info msg="Loading containers: done." Apr 17 23:35:54.225918 dockerd[2322]: time="2026-04-17T23:35:54.225815167Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:35:54.226124 dockerd[2322]: time="2026-04-17T23:35:54.225991444Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:35:54.226177 dockerd[2322]: time="2026-04-17T23:35:54.226143083Z" level=info msg="Daemon has completed initialization" Apr 17 23:35:54.264361 dockerd[2322]: time="2026-04-17T23:35:54.264016108Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:35:54.264592 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 17 23:35:55.032837 containerd[2002]: time="2026-04-17T23:35:55.032796066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 17 23:35:55.647835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899424240.mount: Deactivated successfully.
Apr 17 23:35:57.864012 containerd[2002]: time="2026-04-17T23:35:57.863960615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:57.868894 containerd[2002]: time="2026-04-17T23:35:57.868657727Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100514"
Apr 17 23:35:57.874593 containerd[2002]: time="2026-04-17T23:35:57.874521844Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:57.879803 containerd[2002]: time="2026-04-17T23:35:57.879728330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:57.881639 containerd[2002]: time="2026-04-17T23:35:57.881113517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.848269109s"
Apr 17 23:35:57.881639 containerd[2002]: time="2026-04-17T23:35:57.881164757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 17 23:35:57.882323 containerd[2002]: time="2026-04-17T23:35:57.882274975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 17 23:35:59.772874 containerd[2002]: time="2026-04-17T23:35:59.772796656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:59.778933 containerd[2002]: time="2026-04-17T23:35:59.778429307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252738"
Apr 17 23:35:59.787651 containerd[2002]: time="2026-04-17T23:35:59.787564276Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:59.792700 containerd[2002]: time="2026-04-17T23:35:59.792366453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:59.794034 containerd[2002]: time="2026-04-17T23:35:59.793845007Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.91149433s"
Apr 17 23:35:59.794034 containerd[2002]: time="2026-04-17T23:35:59.793921310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 17 23:35:59.795043 containerd[2002]: time="2026-04-17T23:35:59.794889170Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 17 23:36:01.353161 containerd[2002]: time="2026-04-17T23:36:01.352783444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:01.354797 containerd[2002]: time="2026-04-17T23:36:01.354723948Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810891"
Apr 17 23:36:01.356656 containerd[2002]: time="2026-04-17T23:36:01.356596964Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:01.375785 containerd[2002]: time="2026-04-17T23:36:01.374892256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:01.381154 containerd[2002]: time="2026-04-17T23:36:01.380937474Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.586000852s"
Apr 17 23:36:01.381154 containerd[2002]: time="2026-04-17T23:36:01.380991892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 17 23:36:01.394182 containerd[2002]: time="2026-04-17T23:36:01.392751110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 17 23:36:03.920724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154307983.mount: Deactivated successfully.
Apr 17 23:36:04.047999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:36:04.070340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:36:04.673281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:04.717346 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:36:04.829657 kubelet[2559]: E0417 23:36:04.829457 2559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:36:04.832177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:36:04.832307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:36:05.070703 containerd[2002]: time="2026-04-17T23:36:05.070625615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:05.084534 containerd[2002]: time="2026-04-17T23:36:05.084446310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972954"
Apr 17 23:36:05.104180 containerd[2002]: time="2026-04-17T23:36:05.104094310Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:05.134428 containerd[2002]: time="2026-04-17T23:36:05.134340035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:05.136589 containerd[2002]: time="2026-04-17T23:36:05.136322056Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 3.743511221s"
Apr 17 23:36:05.136589 containerd[2002]: time="2026-04-17T23:36:05.136373118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 17 23:36:05.137696 containerd[2002]: time="2026-04-17T23:36:05.137663229Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 23:36:05.685752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228974082.mount: Deactivated successfully.
Apr 17 23:36:06.356482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:36:07.144949 containerd[2002]: time="2026-04-17T23:36:07.144888981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.146461 containerd[2002]: time="2026-04-17T23:36:07.146385621Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Apr 17 23:36:07.147902 containerd[2002]: time="2026-04-17T23:36:07.147183916Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.150660 containerd[2002]: time="2026-04-17T23:36:07.150613721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.152718 containerd[2002]: time="2026-04-17T23:36:07.152666467Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.014967057s"
Apr 17 23:36:07.152718 containerd[2002]: time="2026-04-17T23:36:07.152717297Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 17 23:36:07.153765 containerd[2002]: time="2026-04-17T23:36:07.153730181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:36:07.696984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801685681.mount: Deactivated successfully.
Apr 17 23:36:07.703212 containerd[2002]: time="2026-04-17T23:36:07.703153944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.704817 containerd[2002]: time="2026-04-17T23:36:07.704755173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 17 23:36:07.706895 containerd[2002]: time="2026-04-17T23:36:07.705773666Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.708547 containerd[2002]: time="2026-04-17T23:36:07.708310257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:07.709229 containerd[2002]: time="2026-04-17T23:36:07.709191562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 555.410153ms"
Apr 17 23:36:07.709339 containerd[2002]: time="2026-04-17T23:36:07.709234515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:36:07.709844 containerd[2002]: time="2026-04-17T23:36:07.709801610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 23:36:08.239508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082521841.mount: Deactivated successfully.
Apr 17 23:36:09.436694 containerd[2002]: time="2026-04-17T23:36:09.436630383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:09.438195 containerd[2002]: time="2026-04-17T23:36:09.438141431Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874817"
Apr 17 23:36:09.439549 containerd[2002]: time="2026-04-17T23:36:09.438925685Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:09.442237 containerd[2002]: time="2026-04-17T23:36:09.442194788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:36:09.443801 containerd[2002]: time="2026-04-17T23:36:09.443754379Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.733900928s"
Apr 17 23:36:09.443922 containerd[2002]: time="2026-04-17T23:36:09.443806999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 17 23:36:12.160350 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:12.172644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:36:12.233504 systemd[1]: Reloading requested from client PID 2715 ('systemctl') (unit session-7.scope)...
Apr 17 23:36:12.233529 systemd[1]: Reloading...
Apr 17 23:36:12.402385 zram_generator::config[2755]: No configuration found.
Apr 17 23:36:12.548974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:36:12.637692 systemd[1]: Reloading finished in 403 ms.
Apr 17 23:36:12.693298 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:36:12.693400 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:36:12.694029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:12.696420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:36:12.929895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:12.942643 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:36:13.018897 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:36:13.018897 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:36:13.018897 kubelet[2818]: I0417 23:36:13.018778 2818 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:36:13.432260 kubelet[2818]: I0417 23:36:13.432214 2818 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 23:36:13.432260 kubelet[2818]: I0417 23:36:13.432248 2818 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:36:13.432454 kubelet[2818]: I0417 23:36:13.432278 2818 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:36:13.432454 kubelet[2818]: I0417 23:36:13.432291 2818 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:36:13.432614 kubelet[2818]: I0417 23:36:13.432590 2818 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:36:13.442519 kubelet[2818]: I0417 23:36:13.441798 2818 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:36:13.444819 kubelet[2818]: E0417 23:36:13.444786 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:36:13.446826 kubelet[2818]: E0417 23:36:13.446785 2818 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:36:13.446970 kubelet[2818]: I0417 23:36:13.446870 2818 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:36:13.454609 kubelet[2818]: I0417 23:36:13.454560 2818 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:36:13.456015 kubelet[2818]: I0417 23:36:13.455797 2818 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:36:13.456255 kubelet[2818]: I0417 23:36:13.455887 2818 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:36:13.456255 kubelet[2818]: I0417 23:36:13.456255 2818 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:36:13.456433 kubelet[2818]: I0417 23:36:13.456271 2818 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:36:13.456433 kubelet[2818]: I0417 23:36:13.456410 2818 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:36:13.458116 kubelet[2818]: I0417 23:36:13.458091 2818 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:36:13.458302 kubelet[2818]: I0417 23:36:13.458281 2818 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:36:13.458360 kubelet[2818]: I0417 23:36:13.458305 2818 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:36:13.458360 kubelet[2818]: I0417 23:36:13.458334 2818 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:36:13.458360 kubelet[2818]: I0417 23:36:13.458354 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:36:13.461248 kubelet[2818]: E0417 23:36:13.460986 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-7&limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:36:13.461248 kubelet[2818]: E0417 23:36:13.461136 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:36:13.461597 kubelet[2818]: I0417 23:36:13.461578 2818 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:36:13.462876 kubelet[2818]: I0417 23:36:13.462361 2818 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:36:13.462876 kubelet[2818]: I0417 23:36:13.462406 2818 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:36:13.462876 kubelet[2818]: W0417 23:36:13.462467 2818 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:36:13.466475 kubelet[2818]: I0417 23:36:13.466459 2818 server.go:1262] "Started kubelet"
Apr 17 23:36:13.471891 kubelet[2818]: I0417 23:36:13.471676 2818 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:36:13.472294 kubelet[2818]: I0417 23:36:13.472261 2818 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:36:13.472446 kubelet[2818]: I0417 23:36:13.472432 2818 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:36:13.472886 kubelet[2818]: I0417 23:36:13.472868 2818 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:36:13.473043 kubelet[2818]: I0417 23:36:13.472937 2818 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:36:13.474567 kubelet[2818]: I0417 23:36:13.474529 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:36:13.482844 kubelet[2818]: E0417 23:36:13.478831 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.7:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-7.18a7491e85f82d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-7,UID:ip-172-31-30-7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-7,},FirstTimestamp:2026-04-17 23:36:13.466430795 +0000 UTC m=+0.495260171,LastTimestamp:2026-04-17 23:36:13.466430795 +0000 UTC m=+0.495260171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-7,}"
Apr 17 23:36:13.482844 kubelet[2818]: I0417 23:36:13.482227 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:36:13.485163 kubelet[2818]: I0417 23:36:13.485140 2818 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:36:13.485435 kubelet[2818]: E0417 23:36:13.485415 2818 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-7\" not found"
Apr 17 23:36:13.486668 kubelet[2818]: I0417 23:36:13.486647 2818 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:36:13.486782 kubelet[2818]: I0417 23:36:13.486709 2818 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:36:13.489384 kubelet[2818]: E0417 23:36:13.489336 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-7?timeout=10s\": dial tcp 172.31.30.7:6443: connect: connection refused" interval="200ms"
Apr 17 23:36:13.490710 kubelet[2818]: E0417 23:36:13.490685 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:36:13.493312 kubelet[2818]: E0417 23:36:13.493279 2818 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:36:13.496185 kubelet[2818]: I0417 23:36:13.496152 2818 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:36:13.496329 kubelet[2818]: I0417 23:36:13.496319 2818 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:36:13.497007 kubelet[2818]: I0417 23:36:13.496982 2818 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:36:13.521847 kubelet[2818]: I0417 23:36:13.521818 2818 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:36:13.522039 kubelet[2818]: I0417 23:36:13.522022 2818 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:36:13.522168 kubelet[2818]: I0417 23:36:13.522159 2818 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:36:13.525681 kubelet[2818]: I0417 23:36:13.525648 2818 policy_none.go:49] "None policy: Start"
Apr 17 23:36:13.525791 kubelet[2818]: I0417 23:36:13.525692 2818 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:36:13.525791 kubelet[2818]: I0417 23:36:13.525707 2818 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:36:13.530969 kubelet[2818]: I0417 23:36:13.530919 2818 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:36:13.534733 kubelet[2818]: I0417 23:36:13.533516 2818 policy_none.go:47] "Start"
Apr 17 23:36:13.534733 kubelet[2818]: I0417 23:36:13.534082 2818 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:36:13.534733 kubelet[2818]: I0417 23:36:13.534105 2818 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:36:13.534733 kubelet[2818]: I0417 23:36:13.534135 2818 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:36:13.534733 kubelet[2818]: E0417 23:36:13.534189 2818 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:36:13.541255 kubelet[2818]: E0417 23:36:13.541184 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:36:13.546910 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:36:13.561695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 23:36:13.573023 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:36:13.574864 kubelet[2818]: E0417 23:36:13.574813 2818 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:36:13.575724 kubelet[2818]: I0417 23:36:13.575655 2818 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:36:13.575724 kubelet[2818]: I0417 23:36:13.575673 2818 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:36:13.577028 kubelet[2818]: I0417 23:36:13.576133 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:36:13.578337 kubelet[2818]: E0417 23:36:13.578300 2818 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:36:13.578450 kubelet[2818]: E0417 23:36:13.578374 2818 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-7\" not found"
Apr 17 23:36:13.654652 systemd[1]: Created slice kubepods-burstable-pod32c6ff144d4f36432e8f820c7b77a4a9.slice - libcontainer container kubepods-burstable-pod32c6ff144d4f36432e8f820c7b77a4a9.slice.
Apr 17 23:36:13.681412 kubelet[2818]: I0417 23:36:13.681000 2818 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7"
Apr 17 23:36:13.681412 kubelet[2818]: E0417 23:36:13.681372 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.7:6443/api/v1/nodes\": dial tcp 172.31.30.7:6443: connect: connection refused" node="ip-172-31-30-7"
Apr 17 23:36:13.685052 kubelet[2818]: E0417 23:36:13.684941 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7"
Apr 17 23:36:13.688072 kubelet[2818]: I0417 23:36:13.687746 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-ca-certs\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7"
Apr 17 23:36:13.688072 kubelet[2818]: I0417 23:36:13.687780 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7"
Apr 17 23:36:13.688072 kubelet[2818]: I0417 23:36:13.687804 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7"
Apr 17 23:36:13.688072 kubelet[2818]: I0417 23:36:13.687828 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7"
Apr 17 23:36:13.688072 kubelet[2818]: I0417 23:36:13.687882 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7"
Apr 17 23:36:13.688381 kubelet[2818]: I0417 23:36:13.687907 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db3eed8e8ed0dcb13e98db842e0caf66-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-7\" (UID: \"db3eed8e8ed0dcb13e98db842e0caf66\") " pod="kube-system/kube-scheduler-ip-172-31-30-7"
Apr 17 23:36:13.688381 kubelet[2818]: I0417 23:36:13.687929 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7"
Apr 17 23:36:13.688381 kubelet[2818]: I0417 23:36:13.687954 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7"
Apr 17 23:36:13.688381 kubelet[2818]: I0417 23:36:13.687987 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7"
Apr 17 23:36:13.692042 kubelet[2818]: E0417 23:36:13.691984 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-7?timeout=10s\": dial tcp 172.31.30.7:6443: connect: connection refused" interval="400ms"
Apr 17 23:36:13.692553 systemd[1]: Created slice kubepods-burstable-pod788d9376d3516b870c60231c2aab9bc8.slice - libcontainer container kubepods-burstable-pod788d9376d3516b870c60231c2aab9bc8.slice.
Apr 17 23:36:13.710385 kubelet[2818]: E0417 23:36:13.710353 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7"
Apr 17 23:36:13.718581 systemd[1]: Created slice kubepods-burstable-poddb3eed8e8ed0dcb13e98db842e0caf66.slice - libcontainer container kubepods-burstable-poddb3eed8e8ed0dcb13e98db842e0caf66.slice.
Apr 17 23:36:13.721467 kubelet[2818]: E0417 23:36:13.721418 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:13.883994 kubelet[2818]: I0417 23:36:13.883942 2818 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7" Apr 17 23:36:13.884421 kubelet[2818]: E0417 23:36:13.884378 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.7:6443/api/v1/nodes\": dial tcp 172.31.30.7:6443: connect: connection refused" node="ip-172-31-30-7" Apr 17 23:36:13.989497 containerd[2002]: time="2026-04-17T23:36:13.989371249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-7,Uid:32c6ff144d4f36432e8f820c7b77a4a9,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:14.015245 containerd[2002]: time="2026-04-17T23:36:14.015179186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-7,Uid:788d9376d3516b870c60231c2aab9bc8,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:14.025083 containerd[2002]: time="2026-04-17T23:36:14.025025832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-7,Uid:db3eed8e8ed0dcb13e98db842e0caf66,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:14.093217 kubelet[2818]: E0417 23:36:14.093157 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-7?timeout=10s\": dial tcp 172.31.30.7:6443: connect: connection refused" interval="800ms" Apr 17 23:36:14.289103 kubelet[2818]: I0417 23:36:14.288562 2818 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7" Apr 17 23:36:14.289103 kubelet[2818]: E0417 23:36:14.289002 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.30.7:6443/api/v1/nodes\": dial tcp 172.31.30.7:6443: connect: connection refused" node="ip-172-31-30-7" Apr 17 23:36:14.408639 kubelet[2818]: E0417 23:36:14.408594 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:36:14.549309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540222582.mount: Deactivated successfully. Apr 17 23:36:14.556593 containerd[2002]: time="2026-04-17T23:36:14.556534879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:36:14.557673 containerd[2002]: time="2026-04-17T23:36:14.557608111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 17 23:36:14.559008 containerd[2002]: time="2026-04-17T23:36:14.558965537Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:36:14.560495 containerd[2002]: time="2026-04-17T23:36:14.560438507Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:36:14.561330 containerd[2002]: time="2026-04-17T23:36:14.561290409Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:36:14.562307 containerd[2002]: 
time="2026-04-17T23:36:14.562254963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:36:14.564433 containerd[2002]: time="2026-04-17T23:36:14.562919208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:36:14.565542 containerd[2002]: time="2026-04-17T23:36:14.565444934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:36:14.567125 containerd[2002]: time="2026-04-17T23:36:14.566829345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.712829ms" Apr 17 23:36:14.569821 containerd[2002]: time="2026-04-17T23:36:14.569773257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 580.312313ms" Apr 17 23:36:14.639935 containerd[2002]: time="2026-04-17T23:36:14.638511990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 623.225354ms" Apr 17 23:36:14.653888 kubelet[2818]: E0417 23:36:14.653788 2818 reflector.go:205] "Failed to watch" err="failed to 
list *v1.Service: Get \"https://172.31.30.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:36:14.668921 kubelet[2818]: E0417 23:36:14.665401 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-7&limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:36:14.750063 kubelet[2818]: E0417 23:36:14.750019 2818 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:36:14.894381 kubelet[2818]: E0417 23:36:14.894201 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-7?timeout=10s\": dial tcp 172.31.30.7:6443: connect: connection refused" interval="1.6s" Apr 17 23:36:15.104984 containerd[2002]: time="2026-04-17T23:36:15.102572181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:15.104984 containerd[2002]: time="2026-04-17T23:36:15.102644609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:15.104984 containerd[2002]: time="2026-04-17T23:36:15.102672125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.104984 containerd[2002]: time="2026-04-17T23:36:15.102789605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.106247 kubelet[2818]: I0417 23:36:15.106216 2818 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7" Apr 17 23:36:15.106620 kubelet[2818]: E0417 23:36:15.106571 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.7:6443/api/v1/nodes\": dial tcp 172.31.30.7:6443: connect: connection refused" node="ip-172-31-30-7" Apr 17 23:36:15.113871 containerd[2002]: time="2026-04-17T23:36:15.113342696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:15.113871 containerd[2002]: time="2026-04-17T23:36:15.113413534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:15.113871 containerd[2002]: time="2026-04-17T23:36:15.113436144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.113871 containerd[2002]: time="2026-04-17T23:36:15.113533487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.130935 containerd[2002]: time="2026-04-17T23:36:15.130753177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:15.131984 containerd[2002]: time="2026-04-17T23:36:15.131899563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:15.131984 containerd[2002]: time="2026-04-17T23:36:15.131943527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.132190 containerd[2002]: time="2026-04-17T23:36:15.132057233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:15.152223 systemd[1]: Started cri-containerd-ed5b58510d6176160f5c4841e3665279ad4e28479c591f9e6779610e506aa688.scope - libcontainer container ed5b58510d6176160f5c4841e3665279ad4e28479c591f9e6779610e506aa688. Apr 17 23:36:15.176524 systemd[1]: Started cri-containerd-33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3.scope - libcontainer container 33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3. Apr 17 23:36:15.188124 systemd[1]: Started cri-containerd-8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b.scope - libcontainer container 8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b. 
Apr 17 23:36:15.265929 containerd[2002]: time="2026-04-17T23:36:15.265443042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-7,Uid:788d9376d3516b870c60231c2aab9bc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3\"" Apr 17 23:36:15.278624 containerd[2002]: time="2026-04-17T23:36:15.278428607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-7,Uid:32c6ff144d4f36432e8f820c7b77a4a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed5b58510d6176160f5c4841e3665279ad4e28479c591f9e6779610e506aa688\"" Apr 17 23:36:15.287717 containerd[2002]: time="2026-04-17T23:36:15.287619037Z" level=info msg="CreateContainer within sandbox \"ed5b58510d6176160f5c4841e3665279ad4e28479c591f9e6779610e506aa688\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:36:15.288154 containerd[2002]: time="2026-04-17T23:36:15.287650180Z" level=info msg="CreateContainer within sandbox \"33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:36:15.304772 containerd[2002]: time="2026-04-17T23:36:15.304350308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-7,Uid:db3eed8e8ed0dcb13e98db842e0caf66,Namespace:kube-system,Attempt:0,} returns sandbox id \"8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b\"" Apr 17 23:36:15.310212 containerd[2002]: time="2026-04-17T23:36:15.310004391Z" level=info msg="CreateContainer within sandbox \"8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:36:15.375718 containerd[2002]: time="2026-04-17T23:36:15.375658042Z" level=info msg="CreateContainer within sandbox \"8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d\"" Apr 17 23:36:15.377488 containerd[2002]: time="2026-04-17T23:36:15.377441547Z" level=info msg="CreateContainer within sandbox \"ed5b58510d6176160f5c4841e3665279ad4e28479c591f9e6779610e506aa688\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ca3e05b8c24f8be48e3e714038748a5bc6c57b42993f5640e189cddf779db7f\"" Apr 17 23:36:15.377776 containerd[2002]: time="2026-04-17T23:36:15.377740934Z" level=info msg="StartContainer for \"5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d\"" Apr 17 23:36:15.380904 containerd[2002]: time="2026-04-17T23:36:15.380029651Z" level=info msg="CreateContainer within sandbox \"33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a\"" Apr 17 23:36:15.380904 containerd[2002]: time="2026-04-17T23:36:15.380283361Z" level=info msg="StartContainer for \"7ca3e05b8c24f8be48e3e714038748a5bc6c57b42993f5640e189cddf779db7f\"" Apr 17 23:36:15.394134 containerd[2002]: time="2026-04-17T23:36:15.394057130Z" level=info msg="StartContainer for \"090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a\"" Apr 17 23:36:15.439759 systemd[1]: Started cri-containerd-5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d.scope - libcontainer container 5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d. Apr 17 23:36:15.450977 systemd[1]: Started cri-containerd-7ca3e05b8c24f8be48e3e714038748a5bc6c57b42993f5640e189cddf779db7f.scope - libcontainer container 7ca3e05b8c24f8be48e3e714038748a5bc6c57b42993f5640e189cddf779db7f. 
Apr 17 23:36:15.468833 systemd[1]: Started cri-containerd-090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a.scope - libcontainer container 090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a. Apr 17 23:36:15.557986 kubelet[2818]: E0417 23:36:15.557438 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:36:15.588662 containerd[2002]: time="2026-04-17T23:36:15.588622897Z" level=info msg="StartContainer for \"7ca3e05b8c24f8be48e3e714038748a5bc6c57b42993f5640e189cddf779db7f\" returns successfully" Apr 17 23:36:15.594671 containerd[2002]: time="2026-04-17T23:36:15.594633601Z" level=info msg="StartContainer for \"5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d\" returns successfully" Apr 17 23:36:15.594958 containerd[2002]: time="2026-04-17T23:36:15.594821071Z" level=info msg="StartContainer for \"090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a\" returns successfully" Apr 17 23:36:16.579782 kubelet[2818]: E0417 23:36:16.579236 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:16.583552 kubelet[2818]: E0417 23:36:16.583311 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:16.588693 kubelet[2818]: E0417 23:36:16.588430 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:16.710577 kubelet[2818]: I0417 23:36:16.710271 2818 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7" Apr 17 23:36:17.592440 kubelet[2818]: E0417 23:36:17.592217 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:17.593006 kubelet[2818]: E0417 23:36:17.592690 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:17.594213 kubelet[2818]: E0417 23:36:17.593970 2818 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:17.715123 kubelet[2818]: E0417 23:36:17.715084 2818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-7\" not found" node="ip-172-31-30-7" Apr 17 23:36:17.806126 kubelet[2818]: I0417 23:36:17.806087 2818 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-7" Apr 17 23:36:17.889097 kubelet[2818]: I0417 23:36:17.887197 2818 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:17.897067 kubelet[2818]: E0417 23:36:17.897015 2818 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:17.897578 kubelet[2818]: I0417 23:36:17.897337 2818 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:17.901216 kubelet[2818]: E0417 23:36:17.901178 2818 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-7\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:17.901216 kubelet[2818]: I0417 23:36:17.901215 2818 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:17.903356 kubelet[2818]: E0417 23:36:17.903321 2818 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:18.461069 kubelet[2818]: I0417 23:36:18.461003 2818 apiserver.go:52] "Watching apiserver" Apr 17 23:36:18.487521 kubelet[2818]: I0417 23:36:18.487472 2818 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:36:18.593549 kubelet[2818]: I0417 23:36:18.593338 2818 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:18.593549 kubelet[2818]: I0417 23:36:18.593450 2818 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:19.938710 systemd[1]: Reloading requested from client PID 3102 ('systemctl') (unit session-7.scope)... Apr 17 23:36:19.938734 systemd[1]: Reloading... Apr 17 23:36:20.070897 zram_generator::config[3145]: No configuration found. Apr 17 23:36:20.202546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:36:20.307085 systemd[1]: Reloading finished in 367 ms. Apr 17 23:36:20.353915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:36:20.370966 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:36:20.371417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:36:20.378327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:36:20.670182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:36:20.683550 (kubelet)[3202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:36:20.766878 kubelet[3202]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:36:20.766878 kubelet[3202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:36:20.766878 kubelet[3202]: I0417 23:36:20.766169 3202 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:36:20.777012 kubelet[3202]: I0417 23:36:20.776981 3202 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:36:20.777174 kubelet[3202]: I0417 23:36:20.777164 3202 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:36:20.777241 kubelet[3202]: I0417 23:36:20.777234 3202 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:36:20.777282 kubelet[3202]: I0417 23:36:20.777276 3202 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:36:20.777606 kubelet[3202]: I0417 23:36:20.777592 3202 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:36:20.779070 kubelet[3202]: I0417 23:36:20.779049 3202 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:36:20.783138 kubelet[3202]: I0417 23:36:20.783100 3202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:36:20.787490 kubelet[3202]: E0417 23:36:20.787015 3202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:36:20.787490 kubelet[3202]: I0417 23:36:20.787084 3202 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:36:20.791980 kubelet[3202]: I0417 23:36:20.791946 3202 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:36:20.793246 kubelet[3202]: I0417 23:36:20.793205 3202 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:36:20.794158 kubelet[3202]: I0417 23:36:20.793720 3202 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:36:20.794356 kubelet[3202]: I0417 23:36:20.794343 3202 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:36:20.794569 kubelet[3202]: I0417 23:36:20.794405 3202 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:36:20.794569 kubelet[3202]: I0417 23:36:20.794434 3202 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:36:20.794734 kubelet[3202]: I0417 23:36:20.794725 3202 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:20.795022 kubelet[3202]: I0417 23:36:20.795003 3202 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:36:20.795126 kubelet[3202]: I0417 23:36:20.795115 3202 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:36:20.795206 kubelet[3202]: I0417 23:36:20.795198 3202 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:36:20.795750 kubelet[3202]: I0417 23:36:20.795378 3202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:36:20.802030 kubelet[3202]: I0417 23:36:20.802003 3202 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:36:20.803223 kubelet[3202]: I0417 23:36:20.802934 3202 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:36:20.804013 kubelet[3202]: I0417 23:36:20.803392 3202 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:36:20.809119 kubelet[3202]: I0417 23:36:20.809097 3202 server.go:1262] "Started kubelet" Apr 17 23:36:20.813266 kubelet[3202]: I0417 23:36:20.812642 3202 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:36:20.813266 kubelet[3202]: I0417 23:36:20.812713 3202 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:36:20.813266 kubelet[3202]: I0417 23:36:20.812755 3202 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:36:20.813266 kubelet[3202]: I0417 23:36:20.813105 3202 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:36:20.816546 kubelet[3202]: I0417 23:36:20.816515 3202 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:36:20.829399 kubelet[3202]: I0417 23:36:20.829367 3202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:36:20.830917 kubelet[3202]: I0417 23:36:20.830649 3202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:36:20.834663 kubelet[3202]: I0417 23:36:20.834638 3202 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:36:20.834969 kubelet[3202]: I0417 23:36:20.834954 3202 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:36:20.835204 kubelet[3202]: I0417 23:36:20.835184 3202 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:36:20.837340 kubelet[3202]: E0417 23:36:20.837313 3202 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:36:20.839751 kubelet[3202]: I0417 23:36:20.839727 3202 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:36:20.840199 kubelet[3202]: I0417 23:36:20.840155 3202 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:36:20.848449 kubelet[3202]: I0417 23:36:20.848402 3202 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:36:20.865245 kubelet[3202]: I0417 23:36:20.864774 3202 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:36:20.875340 kubelet[3202]: I0417 23:36:20.875203 3202 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:36:20.875340 kubelet[3202]: I0417 23:36:20.875256 3202 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:36:20.875340 kubelet[3202]: I0417 23:36:20.875284 3202 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:36:20.875340 kubelet[3202]: E0417 23:36:20.875340 3202 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:36:20.927303 kubelet[3202]: I0417 23:36:20.927202 3202 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:36:20.929395 kubelet[3202]: I0417 23:36:20.928686 3202 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:36:20.929395 kubelet[3202]: I0417 23:36:20.928740 3202 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:36:20.929395 kubelet[3202]: I0417 23:36:20.928949 3202 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:36:20.930210 kubelet[3202]: I0417 23:36:20.928965 3202 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:36:20.930620 kubelet[3202]: I0417 23:36:20.930558 3202 policy_none.go:49] "None policy: Start" Apr 17 23:36:20.930821 kubelet[3202]: I0417 23:36:20.930754 3202 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:36:20.932918 kubelet[3202]: I0417 23:36:20.931523 3202 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:36:20.932918 kubelet[3202]: I0417 23:36:20.931717 3202 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:36:20.932918 kubelet[3202]: I0417 23:36:20.931730 3202 policy_none.go:47] "Start" Apr 17 23:36:20.940615 kubelet[3202]: E0417 23:36:20.940575 3202 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:36:20.940887 kubelet[3202]: I0417 23:36:20.940829 3202 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:36:20.940985 kubelet[3202]: I0417 23:36:20.940876 3202 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:36:20.943701 kubelet[3202]: I0417 23:36:20.943676 3202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:36:20.947965 kubelet[3202]: E0417 23:36:20.946365 3202 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:36:20.977406 kubelet[3202]: I0417 23:36:20.977187 3202 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:20.977406 kubelet[3202]: I0417 23:36:20.977407 3202 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:20.982554 kubelet[3202]: I0417 23:36:20.982198 3202 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:20.992160 kubelet[3202]: E0417 23:36:20.992108 3202 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-7\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:21.009000 kubelet[3202]: E0417 23:36:21.008657 3202 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-7\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:21.037489 kubelet[3202]: I0417 23:36:21.037243 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:21.037489 kubelet[3202]: I0417 23:36:21.037323 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db3eed8e8ed0dcb13e98db842e0caf66-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-7\" (UID: \"db3eed8e8ed0dcb13e98db842e0caf66\") " pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:21.037489 kubelet[3202]: I0417 23:36:21.037419 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:21.037489 kubelet[3202]: I0417 23:36:21.037484 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:21.037822 kubelet[3202]: I0417 23:36:21.037561 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:21.037822 kubelet[3202]: I0417 23:36:21.037586 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:21.037822 kubelet[3202]: I0417 23:36:21.037649 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32c6ff144d4f36432e8f820c7b77a4a9-ca-certs\") pod \"kube-apiserver-ip-172-31-30-7\" (UID: \"32c6ff144d4f36432e8f820c7b77a4a9\") " pod="kube-system/kube-apiserver-ip-172-31-30-7" Apr 17 23:36:21.037822 kubelet[3202]: I0417 23:36:21.037714 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:21.037822 kubelet[3202]: I0417 23:36:21.037781 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/788d9376d3516b870c60231c2aab9bc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-7\" (UID: \"788d9376d3516b870c60231c2aab9bc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-7" Apr 17 23:36:21.048443 kubelet[3202]: I0417 23:36:21.048406 3202 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-7" Apr 17 23:36:21.062082 kubelet[3202]: I0417 23:36:21.062035 3202 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-7" Apr 17 23:36:21.062247 kubelet[3202]: I0417 23:36:21.062141 3202 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-7" Apr 17 23:36:21.095018 update_engine[1965]: I20260417 23:36:21.094918 1965 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:36:21.178885 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3259) Apr 17 23:36:21.437889 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3258) Apr 17 23:36:21.805989 kubelet[3202]: I0417 23:36:21.805637 3202 apiserver.go:52] "Watching apiserver" Apr 17 23:36:21.835500 kubelet[3202]: I0417 23:36:21.835464 3202 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:36:21.907501 kubelet[3202]: I0417 23:36:21.907439 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-7" podStartSLOduration=3.907420997 podStartE2EDuration="3.907420997s" podCreationTimestamp="2026-04-17 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:21.902676837 +0000 UTC m=+1.210142698" watchObservedRunningTime="2026-04-17 23:36:21.907420997 +0000 UTC m=+1.214886838" Apr 17 23:36:21.909028 kubelet[3202]: I0417 23:36:21.908662 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-7" podStartSLOduration=3.908641278 podStartE2EDuration="3.908641278s" podCreationTimestamp="2026-04-17 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:21.889217906 +0000 UTC m=+1.196683770" watchObservedRunningTime="2026-04-17 23:36:21.908641278 +0000 UTC m=+1.216107141" Apr 17 23:36:21.909028 kubelet[3202]: I0417 23:36:21.908992 3202 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:21.934593 kubelet[3202]: E0417 23:36:21.932917 3202 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-7\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-30-7" Apr 17 23:36:21.988527 kubelet[3202]: I0417 23:36:21.988041 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-7" podStartSLOduration=1.9880196049999999 podStartE2EDuration="1.988019605s" podCreationTimestamp="2026-04-17 23:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:21.921748398 +0000 UTC m=+1.229214260" watchObservedRunningTime="2026-04-17 23:36:21.988019605 +0000 UTC m=+1.295485467" Apr 17 23:36:26.979060 kubelet[3202]: I0417 23:36:26.977848 3202 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:36:26.979711 containerd[2002]: time="2026-04-17T23:36:26.978929423Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:36:26.981177 kubelet[3202]: I0417 23:36:26.981140 3202 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:36:27.989915 systemd[1]: Created slice kubepods-besteffort-poddb617dfc_5ddc_473b_b2cb_5d41d6cd0063.slice - libcontainer container kubepods-besteffort-poddb617dfc_5ddc_473b_b2cb_5d41d6cd0063.slice. 
Apr 17 23:36:27.994261 kubelet[3202]: I0417 23:36:27.994225 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db617dfc-5ddc-473b-b2cb-5d41d6cd0063-xtables-lock\") pod \"kube-proxy-f524f\" (UID: \"db617dfc-5ddc-473b-b2cb-5d41d6cd0063\") " pod="kube-system/kube-proxy-f524f" Apr 17 23:36:27.994987 kubelet[3202]: I0417 23:36:27.994268 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db617dfc-5ddc-473b-b2cb-5d41d6cd0063-lib-modules\") pod \"kube-proxy-f524f\" (UID: \"db617dfc-5ddc-473b-b2cb-5d41d6cd0063\") " pod="kube-system/kube-proxy-f524f" Apr 17 23:36:27.994987 kubelet[3202]: I0417 23:36:27.994294 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wfs\" (UniqueName: \"kubernetes.io/projected/db617dfc-5ddc-473b-b2cb-5d41d6cd0063-kube-api-access-v7wfs\") pod \"kube-proxy-f524f\" (UID: \"db617dfc-5ddc-473b-b2cb-5d41d6cd0063\") " pod="kube-system/kube-proxy-f524f" Apr 17 23:36:27.994987 kubelet[3202]: I0417 23:36:27.994364 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db617dfc-5ddc-473b-b2cb-5d41d6cd0063-kube-proxy\") pod \"kube-proxy-f524f\" (UID: \"db617dfc-5ddc-473b-b2cb-5d41d6cd0063\") " pod="kube-system/kube-proxy-f524f" Apr 17 23:36:28.217834 systemd[1]: Created slice kubepods-besteffort-podff431991_f4ee_433f_b808_1690e4ec50cb.slice - libcontainer container kubepods-besteffort-podff431991_f4ee_433f_b808_1690e4ec50cb.slice. 
Apr 17 23:36:28.299615 kubelet[3202]: I0417 23:36:28.299149 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff431991-f4ee-433f-b808-1690e4ec50cb-var-lib-calico\") pod \"tigera-operator-5588576f44-86djg\" (UID: \"ff431991-f4ee-433f-b808-1690e4ec50cb\") " pod="tigera-operator/tigera-operator-5588576f44-86djg" Apr 17 23:36:28.299615 kubelet[3202]: I0417 23:36:28.299306 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csz2f\" (UniqueName: \"kubernetes.io/projected/ff431991-f4ee-433f-b808-1690e4ec50cb-kube-api-access-csz2f\") pod \"tigera-operator-5588576f44-86djg\" (UID: \"ff431991-f4ee-433f-b808-1690e4ec50cb\") " pod="tigera-operator/tigera-operator-5588576f44-86djg" Apr 17 23:36:28.302832 containerd[2002]: time="2026-04-17T23:36:28.302786518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f524f,Uid:db617dfc-5ddc-473b-b2cb-5d41d6cd0063,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:28.338625 containerd[2002]: time="2026-04-17T23:36:28.337836815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:28.338625 containerd[2002]: time="2026-04-17T23:36:28.338222224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:28.338625 containerd[2002]: time="2026-04-17T23:36:28.338256770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.341714 containerd[2002]: time="2026-04-17T23:36:28.339364660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.364802 systemd[1]: run-containerd-runc-k8s.io-93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37-runc.xWgeOJ.mount: Deactivated successfully. Apr 17 23:36:28.374154 systemd[1]: Started cri-containerd-93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37.scope - libcontainer container 93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37. Apr 17 23:36:28.403633 containerd[2002]: time="2026-04-17T23:36:28.403559967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f524f,Uid:db617dfc-5ddc-473b-b2cb-5d41d6cd0063,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37\"" Apr 17 23:36:28.421272 containerd[2002]: time="2026-04-17T23:36:28.421211951Z" level=info msg="CreateContainer within sandbox \"93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:36:28.446811 containerd[2002]: time="2026-04-17T23:36:28.446627882Z" level=info msg="CreateContainer within sandbox \"93e95ad1a747d00ad8a40575cb4628e6cf85b8c85068f72f14b68b9aef99cd37\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99b5f9bb630fbd8b74c12c5464e5c129721ec8ea1f4aeb70ad072ff3e1e588b6\"" Apr 17 23:36:28.448055 containerd[2002]: time="2026-04-17T23:36:28.448019652Z" level=info msg="StartContainer for \"99b5f9bb630fbd8b74c12c5464e5c129721ec8ea1f4aeb70ad072ff3e1e588b6\"" Apr 17 23:36:28.485140 systemd[1]: Started cri-containerd-99b5f9bb630fbd8b74c12c5464e5c129721ec8ea1f4aeb70ad072ff3e1e588b6.scope - libcontainer container 99b5f9bb630fbd8b74c12c5464e5c129721ec8ea1f4aeb70ad072ff3e1e588b6. 
Apr 17 23:36:28.519365 containerd[2002]: time="2026-04-17T23:36:28.519305934Z" level=info msg="StartContainer for \"99b5f9bb630fbd8b74c12c5464e5c129721ec8ea1f4aeb70ad072ff3e1e588b6\" returns successfully" Apr 17 23:36:28.527030 containerd[2002]: time="2026-04-17T23:36:28.526989882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-86djg,Uid:ff431991-f4ee-433f-b808-1690e4ec50cb,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:36:28.569666 containerd[2002]: time="2026-04-17T23:36:28.569168367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:28.569666 containerd[2002]: time="2026-04-17T23:36:28.569252509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:28.569666 containerd[2002]: time="2026-04-17T23:36:28.569274434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.569666 containerd[2002]: time="2026-04-17T23:36:28.569393375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:28.601086 systemd[1]: Started cri-containerd-d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf.scope - libcontainer container d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf. 
Apr 17 23:36:28.679012 containerd[2002]: time="2026-04-17T23:36:28.678952544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-86djg,Uid:ff431991-f4ee-433f-b808-1690e4ec50cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf\"" Apr 17 23:36:28.681648 containerd[2002]: time="2026-04-17T23:36:28.681597236Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:36:29.769602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649565444.mount: Deactivated successfully. Apr 17 23:36:31.794653 containerd[2002]: time="2026-04-17T23:36:31.794593436Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.800192 containerd[2002]: time="2026-04-17T23:36:31.799895558Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:36:31.805888 containerd[2002]: time="2026-04-17T23:36:31.805775201Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.811694 containerd[2002]: time="2026-04-17T23:36:31.811623731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.813243 containerd[2002]: time="2026-04-17T23:36:31.812745922Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.131100242s" Apr 17 23:36:31.813243 
containerd[2002]: time="2026-04-17T23:36:31.812794452Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:36:31.822578 containerd[2002]: time="2026-04-17T23:36:31.822534834Z" level=info msg="CreateContainer within sandbox \"d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:36:31.852179 containerd[2002]: time="2026-04-17T23:36:31.851808088Z" level=info msg="CreateContainer within sandbox \"d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa\"" Apr 17 23:36:31.853555 containerd[2002]: time="2026-04-17T23:36:31.852976221Z" level=info msg="StartContainer for \"584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa\"" Apr 17 23:36:31.897080 systemd[1]: Started cri-containerd-584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa.scope - libcontainer container 584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa. 
Apr 17 23:36:31.945272 containerd[2002]: time="2026-04-17T23:36:31.944357509Z" level=info msg="StartContainer for \"584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa\" returns successfully" Apr 17 23:36:32.824099 kubelet[3202]: I0417 23:36:32.823989 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f524f" podStartSLOduration=5.823968236 podStartE2EDuration="5.823968236s" podCreationTimestamp="2026-04-17 23:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:28.949616629 +0000 UTC m=+8.257082492" watchObservedRunningTime="2026-04-17 23:36:32.823968236 +0000 UTC m=+12.131434079" Apr 17 23:36:37.460347 sudo[2305]: pam_unix(sudo:session): session closed for user root Apr 17 23:36:37.625601 sshd[2302]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:37.633516 systemd[1]: sshd@6-172.31.30.7:22-20.229.252.112:40544.service: Deactivated successfully. Apr 17 23:36:37.637057 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:36:37.637432 systemd[1]: session-7.scope: Consumed 5.331s CPU time, 155.6M memory peak, 0B memory swap peak. Apr 17 23:36:37.638746 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:36:37.643936 systemd-logind[1963]: Removed session 7. 
Apr 17 23:36:42.121124 kubelet[3202]: I0417 23:36:42.121041 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-86djg" podStartSLOduration=10.987990639 podStartE2EDuration="14.121020315s" podCreationTimestamp="2026-04-17 23:36:28 +0000 UTC" firstStartedPulling="2026-04-17 23:36:28.680836645 +0000 UTC m=+7.988302487" lastFinishedPulling="2026-04-17 23:36:31.81386631 +0000 UTC m=+11.121332163" observedRunningTime="2026-04-17 23:36:32.963860948 +0000 UTC m=+12.271326802" watchObservedRunningTime="2026-04-17 23:36:42.121020315 +0000 UTC m=+21.428486176" Apr 17 23:36:42.142438 systemd[1]: Created slice kubepods-besteffort-podd13d784e_c764_4f05_99d0_f9cffbf1cd82.slice - libcontainer container kubepods-besteffort-podd13d784e_c764_4f05_99d0_f9cffbf1cd82.slice. Apr 17 23:36:42.180509 kubelet[3202]: I0417 23:36:42.180452 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhql\" (UniqueName: \"kubernetes.io/projected/d13d784e-c764-4f05-99d0-f9cffbf1cd82-kube-api-access-7fhql\") pod \"calico-typha-66d74b4486-wv8nk\" (UID: \"d13d784e-c764-4f05-99d0-f9cffbf1cd82\") " pod="calico-system/calico-typha-66d74b4486-wv8nk" Apr 17 23:36:42.180509 kubelet[3202]: I0417 23:36:42.180516 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13d784e-c764-4f05-99d0-f9cffbf1cd82-tigera-ca-bundle\") pod \"calico-typha-66d74b4486-wv8nk\" (UID: \"d13d784e-c764-4f05-99d0-f9cffbf1cd82\") " pod="calico-system/calico-typha-66d74b4486-wv8nk" Apr 17 23:36:42.180750 kubelet[3202]: I0417 23:36:42.180542 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d13d784e-c764-4f05-99d0-f9cffbf1cd82-typha-certs\") pod \"calico-typha-66d74b4486-wv8nk\" (UID: 
\"d13d784e-c764-4f05-99d0-f9cffbf1cd82\") " pod="calico-system/calico-typha-66d74b4486-wv8nk" Apr 17 23:36:42.281055 kubelet[3202]: I0417 23:36:42.281023 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-lib-modules\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281208 kubelet[3202]: I0417 23:36:42.281087 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-cni-net-dir\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281208 kubelet[3202]: I0417 23:36:42.281110 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-sys-fs\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281208 kubelet[3202]: I0417 23:36:42.281131 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-xtables-lock\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281208 kubelet[3202]: I0417 23:36:42.281154 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-node-certs\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 
23:36:42.281208 kubelet[3202]: I0417 23:36:42.281205 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-bpffs\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281469 kubelet[3202]: I0417 23:36:42.281228 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-policysync\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281469 kubelet[3202]: I0417 23:36:42.281253 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-tigera-ca-bundle\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281469 kubelet[3202]: I0417 23:36:42.281281 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-flexvol-driver-host\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281469 kubelet[3202]: I0417 23:36:42.281304 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-nodeproc\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281469 kubelet[3202]: I0417 23:36:42.281328 3202 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-var-run-calico\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281666 kubelet[3202]: I0417 23:36:42.281350 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzhm\" (UniqueName: \"kubernetes.io/projected/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-kube-api-access-gkzhm\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281666 kubelet[3202]: I0417 23:36:42.281375 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-cni-bin-dir\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281666 kubelet[3202]: I0417 23:36:42.281408 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-cni-log-dir\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.281666 kubelet[3202]: I0417 23:36:42.281434 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ec281b8-d680-4c8e-9293-9cfd593f8d4b-var-lib-calico\") pod \"calico-node-8q4h8\" (UID: \"4ec281b8-d680-4c8e-9293-9cfd593f8d4b\") " pod="calico-system/calico-node-8q4h8" Apr 17 23:36:42.287485 systemd[1]: Created slice kubepods-besteffort-pod4ec281b8_d680_4c8e_9293_9cfd593f8d4b.slice - libcontainer container 
kubepods-besteffort-pod4ec281b8_d680_4c8e_9293_9cfd593f8d4b.slice. Apr 17 23:36:42.394995 kubelet[3202]: E0417 23:36:42.393844 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.394995 kubelet[3202]: W0417 23:36:42.394932 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.394995 kubelet[3202]: E0417 23:36:42.394961 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.406625 kubelet[3202]: E0417 23:36:42.406591 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.406625 kubelet[3202]: W0417 23:36:42.406621 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.406811 kubelet[3202]: E0417 23:36:42.406646 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.448677 kubelet[3202]: E0417 23:36:42.448169 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:42.450162 containerd[2002]: time="2026-04-17T23:36:42.450119318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d74b4486-wv8nk,Uid:d13d784e-c764-4f05-99d0-f9cffbf1cd82,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:42.482329 kubelet[3202]: E0417 23:36:42.482232 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.482329 kubelet[3202]: W0417 23:36:42.482265 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.482329 kubelet[3202]: E0417 23:36:42.482291 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.482568 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.483982 kubelet[3202]: W0417 23:36:42.482581 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.482596 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.482969 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.483982 kubelet[3202]: W0417 23:36:42.482982 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.482996 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.483682 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.483982 kubelet[3202]: W0417 23:36:42.483694 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.483982 kubelet[3202]: E0417 23:36:42.483709 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.484762 kubelet[3202]: E0417 23:36:42.484182 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.484762 kubelet[3202]: W0417 23:36:42.484194 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.484762 kubelet[3202]: E0417 23:36:42.484209 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.484762 kubelet[3202]: E0417 23:36:42.484616 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.484762 kubelet[3202]: W0417 23:36:42.484628 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.484762 kubelet[3202]: E0417 23:36:42.484642 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.488087 kubelet[3202]: E0417 23:36:42.488061 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.488087 kubelet[3202]: W0417 23:36:42.488085 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.488255 kubelet[3202]: E0417 23:36:42.488106 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.488747 kubelet[3202]: E0417 23:36:42.488725 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.488747 kubelet[3202]: W0417 23:36:42.488746 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.489087 kubelet[3202]: E0417 23:36:42.488763 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.489655 kubelet[3202]: E0417 23:36:42.489504 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.489655 kubelet[3202]: W0417 23:36:42.489529 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.489655 kubelet[3202]: E0417 23:36:42.489545 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.490347 kubelet[3202]: I0417 23:36:42.490099 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd62a40-a858-425f-a3e8-4e85787fe5f7-kubelet-dir\") pod \"csi-node-driver-tttlp\" (UID: \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\") " pod="calico-system/csi-node-driver-tttlp" Apr 17 23:36:42.490621 kubelet[3202]: E0417 23:36:42.490489 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.490621 kubelet[3202]: W0417 23:36:42.490502 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.490621 kubelet[3202]: E0417 23:36:42.490516 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.491338 kubelet[3202]: E0417 23:36:42.491267 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.491338 kubelet[3202]: W0417 23:36:42.491286 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.491338 kubelet[3202]: E0417 23:36:42.491302 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.492334 kubelet[3202]: E0417 23:36:42.492011 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.492334 kubelet[3202]: W0417 23:36:42.492027 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.492334 kubelet[3202]: E0417 23:36:42.492041 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.492334 kubelet[3202]: I0417 23:36:42.492071 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdd62a40-a858-425f-a3e8-4e85787fe5f7-registration-dir\") pod \"csi-node-driver-tttlp\" (UID: \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\") " pod="calico-system/csi-node-driver-tttlp" Apr 17 23:36:42.492746 kubelet[3202]: E0417 23:36:42.492731 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.492873 kubelet[3202]: W0417 23:36:42.492822 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.492873 kubelet[3202]: E0417 23:36:42.492840 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.493371 kubelet[3202]: E0417 23:36:42.493282 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.493371 kubelet[3202]: W0417 23:36:42.493295 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.493371 kubelet[3202]: E0417 23:36:42.493308 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.493836 kubelet[3202]: E0417 23:36:42.493732 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.493836 kubelet[3202]: W0417 23:36:42.493745 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.493836 kubelet[3202]: E0417 23:36:42.493758 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.494504 kubelet[3202]: E0417 23:36:42.494337 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.494504 kubelet[3202]: W0417 23:36:42.494352 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.494504 kubelet[3202]: E0417 23:36:42.494365 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.494915 kubelet[3202]: E0417 23:36:42.494902 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.495121 kubelet[3202]: W0417 23:36:42.495016 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.495121 kubelet[3202]: E0417 23:36:42.495034 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.495532 kubelet[3202]: E0417 23:36:42.495413 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.495532 kubelet[3202]: W0417 23:36:42.495427 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.495532 kubelet[3202]: E0417 23:36:42.495440 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.495936 kubelet[3202]: E0417 23:36:42.495829 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.495936 kubelet[3202]: W0417 23:36:42.495840 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.495936 kubelet[3202]: E0417 23:36:42.495890 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.496356 kubelet[3202]: E0417 23:36:42.496285 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.496356 kubelet[3202]: W0417 23:36:42.496298 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.496356 kubelet[3202]: E0417 23:36:42.496313 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.496834 kubelet[3202]: E0417 23:36:42.496724 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.496834 kubelet[3202]: W0417 23:36:42.496738 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.496834 kubelet[3202]: E0417 23:36:42.496751 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.497305 kubelet[3202]: E0417 23:36:42.497177 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.497305 kubelet[3202]: W0417 23:36:42.497189 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.497305 kubelet[3202]: E0417 23:36:42.497202 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.497678 kubelet[3202]: E0417 23:36:42.497595 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.497678 kubelet[3202]: W0417 23:36:42.497610 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.497678 kubelet[3202]: E0417 23:36:42.497624 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.498156 kubelet[3202]: E0417 23:36:42.498049 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.498156 kubelet[3202]: W0417 23:36:42.498061 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.498156 kubelet[3202]: E0417 23:36:42.498074 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.498644 kubelet[3202]: E0417 23:36:42.498517 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.498644 kubelet[3202]: W0417 23:36:42.498552 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.498644 kubelet[3202]: E0417 23:36:42.498566 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.499159 kubelet[3202]: E0417 23:36:42.499147 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.499326 kubelet[3202]: W0417 23:36:42.499226 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.499326 kubelet[3202]: E0417 23:36:42.499242 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.542707 containerd[2002]: time="2026-04-17T23:36:42.542436881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:42.542707 containerd[2002]: time="2026-04-17T23:36:42.542540240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:42.542707 containerd[2002]: time="2026-04-17T23:36:42.542562396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:42.545261 containerd[2002]: time="2026-04-17T23:36:42.544668275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:42.594640 kubelet[3202]: E0417 23:36:42.594593 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.594640 kubelet[3202]: W0417 23:36:42.594624 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.594886 kubelet[3202]: E0417 23:36:42.594649 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.595209 kubelet[3202]: E0417 23:36:42.595036 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.595209 kubelet[3202]: W0417 23:36:42.595055 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.595209 kubelet[3202]: E0417 23:36:42.595073 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.595419 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.596089 kubelet[3202]: W0417 23:36:42.595430 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.595445 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.595714 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.596089 kubelet[3202]: W0417 23:36:42.595725 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.595738 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.596008 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.596089 kubelet[3202]: W0417 23:36:42.596019 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.596089 kubelet[3202]: E0417 23:36:42.596032 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.598085 kubelet[3202]: E0417 23:36:42.597019 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.598085 kubelet[3202]: W0417 23:36:42.597031 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.598085 kubelet[3202]: E0417 23:36:42.597046 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.598085 kubelet[3202]: E0417 23:36:42.597307 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.598085 kubelet[3202]: W0417 23:36:42.597318 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.598085 kubelet[3202]: E0417 23:36:42.597331 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.598085 kubelet[3202]: I0417 23:36:42.597449 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdd62a40-a858-425f-a3e8-4e85787fe5f7-varrun\") pod \"csi-node-driver-tttlp\" (UID: \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\") " pod="calico-system/csi-node-driver-tttlp" Apr 17 23:36:42.598085 kubelet[3202]: E0417 23:36:42.597629 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.598085 kubelet[3202]: W0417 23:36:42.597643 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.599377 kubelet[3202]: E0417 23:36:42.597655 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.599377 kubelet[3202]: E0417 23:36:42.598010 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.599377 kubelet[3202]: W0417 23:36:42.598022 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.599377 kubelet[3202]: E0417 23:36:42.598037 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.599982 kubelet[3202]: E0417 23:36:42.599458 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.599982 kubelet[3202]: W0417 23:36:42.599470 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.599982 kubelet[3202]: E0417 23:36:42.599484 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.599982 kubelet[3202]: E0417 23:36:42.599774 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.599982 kubelet[3202]: W0417 23:36:42.599784 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.599982 kubelet[3202]: E0417 23:36:42.599797 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.599982 kubelet[3202]: I0417 23:36:42.599967 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjh7m\" (UniqueName: \"kubernetes.io/projected/cdd62a40-a858-425f-a3e8-4e85787fe5f7-kube-api-access-bjh7m\") pod \"csi-node-driver-tttlp\" (UID: \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\") " pod="calico-system/csi-node-driver-tttlp" Apr 17 23:36:42.600943 kubelet[3202]: E0417 23:36:42.600143 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.600943 kubelet[3202]: W0417 23:36:42.600153 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.600943 kubelet[3202]: E0417 23:36:42.600166 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.600943 kubelet[3202]: E0417 23:36:42.600451 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.600943 kubelet[3202]: W0417 23:36:42.600463 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.600943 kubelet[3202]: E0417 23:36:42.600477 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.601629 kubelet[3202]: E0417 23:36:42.601601 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.601629 kubelet[3202]: W0417 23:36:42.601615 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.601629 kubelet[3202]: E0417 23:36:42.601630 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.603077 kubelet[3202]: I0417 23:36:42.602808 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdd62a40-a858-425f-a3e8-4e85787fe5f7-socket-dir\") pod \"csi-node-driver-tttlp\" (UID: \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\") " pod="calico-system/csi-node-driver-tttlp" Apr 17 23:36:42.603162 kubelet[3202]: E0417 23:36:42.603128 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.603162 kubelet[3202]: W0417 23:36:42.603142 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.603162 kubelet[3202]: E0417 23:36:42.603157 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.603582 kubelet[3202]: E0417 23:36:42.603403 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.603582 kubelet[3202]: W0417 23:36:42.603416 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.603582 kubelet[3202]: E0417 23:36:42.603429 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.604043 kubelet[3202]: E0417 23:36:42.603756 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.604043 kubelet[3202]: W0417 23:36:42.603769 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.604043 kubelet[3202]: E0417 23:36:42.603782 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.604896 kubelet[3202]: E0417 23:36:42.604693 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.604896 kubelet[3202]: W0417 23:36:42.604708 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.604896 kubelet[3202]: E0417 23:36:42.604724 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.605078 kubelet[3202]: E0417 23:36:42.605040 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.605078 kubelet[3202]: W0417 23:36:42.605052 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.605078 kubelet[3202]: E0417 23:36:42.605066 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.610894 containerd[2002]: time="2026-04-17T23:36:42.610207932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q4h8,Uid:4ec281b8-d680-4c8e-9293-9cfd593f8d4b,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:42.652704 systemd[1]: Started cri-containerd-87981d27151a40dda85b7712492adbe82fe30d977171bb07985dc8415455fb2f.scope - libcontainer container 87981d27151a40dda85b7712492adbe82fe30d977171bb07985dc8415455fb2f. Apr 17 23:36:42.670668 containerd[2002]: time="2026-04-17T23:36:42.670437377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:42.670668 containerd[2002]: time="2026-04-17T23:36:42.670527756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:42.670668 containerd[2002]: time="2026-04-17T23:36:42.670550185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:42.671516 containerd[2002]: time="2026-04-17T23:36:42.671179329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:42.702114 systemd[1]: Started cri-containerd-bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619.scope - libcontainer container bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619. Apr 17 23:36:42.706388 kubelet[3202]: E0417 23:36:42.706364 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.706529 kubelet[3202]: W0417 23:36:42.706513 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.706632 kubelet[3202]: E0417 23:36:42.706618 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.707183 kubelet[3202]: E0417 23:36:42.707103 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.707384 kubelet[3202]: W0417 23:36:42.707278 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.707384 kubelet[3202]: E0417 23:36:42.707302 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.707839 kubelet[3202]: E0417 23:36:42.707732 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.707839 kubelet[3202]: W0417 23:36:42.707746 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.707839 kubelet[3202]: E0417 23:36:42.707760 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.708437 kubelet[3202]: E0417 23:36:42.708277 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.708437 kubelet[3202]: W0417 23:36:42.708291 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.708437 kubelet[3202]: E0417 23:36:42.708305 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.708966 kubelet[3202]: E0417 23:36:42.708825 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.708966 kubelet[3202]: W0417 23:36:42.708838 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.708966 kubelet[3202]: E0417 23:36:42.708887 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.709475 kubelet[3202]: E0417 23:36:42.709463 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.709634 kubelet[3202]: W0417 23:36:42.709541 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.709634 kubelet[3202]: E0417 23:36:42.709558 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.710181 kubelet[3202]: E0417 23:36:42.710033 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.710181 kubelet[3202]: W0417 23:36:42.710047 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.710181 kubelet[3202]: E0417 23:36:42.710061 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.711865 kubelet[3202]: E0417 23:36:42.711711 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.711865 kubelet[3202]: W0417 23:36:42.711726 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.711865 kubelet[3202]: E0417 23:36:42.711740 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.712227 kubelet[3202]: E0417 23:36:42.712138 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.712227 kubelet[3202]: W0417 23:36:42.712153 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.712227 kubelet[3202]: E0417 23:36:42.712166 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.713446 kubelet[3202]: E0417 23:36:42.713281 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.713446 kubelet[3202]: W0417 23:36:42.713295 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.713446 kubelet[3202]: E0417 23:36:42.713309 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.714977 kubelet[3202]: E0417 23:36:42.714317 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.714977 kubelet[3202]: W0417 23:36:42.714332 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.714977 kubelet[3202]: E0417 23:36:42.714345 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.715635 kubelet[3202]: E0417 23:36:42.715484 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.715635 kubelet[3202]: W0417 23:36:42.715499 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.715635 kubelet[3202]: E0417 23:36:42.715525 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.717256 kubelet[3202]: E0417 23:36:42.716182 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.717256 kubelet[3202]: W0417 23:36:42.716196 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.717256 kubelet[3202]: E0417 23:36:42.716219 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.717716 kubelet[3202]: E0417 23:36:42.717672 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.717716 kubelet[3202]: W0417 23:36:42.717686 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.717716 kubelet[3202]: E0417 23:36:42.717700 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.719341 kubelet[3202]: E0417 23:36:42.719089 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.719341 kubelet[3202]: W0417 23:36:42.719104 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.719341 kubelet[3202]: E0417 23:36:42.719274 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:36:42.732574 kubelet[3202]: E0417 23:36:42.732493 3202 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:36:42.732574 kubelet[3202]: W0417 23:36:42.732516 3202 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:36:42.732574 kubelet[3202]: E0417 23:36:42.732540 3202 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:36:42.768877 containerd[2002]: time="2026-04-17T23:36:42.768707180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q4h8,Uid:4ec281b8-d680-4c8e-9293-9cfd593f8d4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\"" Apr 17 23:36:42.772136 containerd[2002]: time="2026-04-17T23:36:42.770261747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d74b4486-wv8nk,Uid:d13d784e-c764-4f05-99d0-f9cffbf1cd82,Namespace:calico-system,Attempt:0,} returns sandbox id \"87981d27151a40dda85b7712492adbe82fe30d977171bb07985dc8415455fb2f\"" Apr 17 23:36:42.778023 containerd[2002]: time="2026-04-17T23:36:42.776309521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:36:43.876362 kubelet[3202]: E0417 23:36:43.876305 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:44.358198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519987745.mount: Deactivated successfully. 
Apr 17 23:36:44.481114 containerd[2002]: time="2026-04-17T23:36:44.481058674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:44.482250 containerd[2002]: time="2026-04-17T23:36:44.482154267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 17 23:36:44.484179 containerd[2002]: time="2026-04-17T23:36:44.483764193Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:44.486305 containerd[2002]: time="2026-04-17T23:36:44.486267602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:44.487300 containerd[2002]: time="2026-04-17T23:36:44.487068653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.710708349s" Apr 17 23:36:44.487412 containerd[2002]: time="2026-04-17T23:36:44.487317572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:36:44.488996 containerd[2002]: time="2026-04-17T23:36:44.488962487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:36:44.493542 containerd[2002]: time="2026-04-17T23:36:44.493503613Z" level=info msg="CreateContainer within 
sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:36:44.511921 containerd[2002]: time="2026-04-17T23:36:44.511753177Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762\"" Apr 17 23:36:44.512775 containerd[2002]: time="2026-04-17T23:36:44.512719670Z" level=info msg="StartContainer for \"6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762\"" Apr 17 23:36:44.564478 systemd[1]: run-containerd-runc-k8s.io-6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762-runc.1b8yLg.mount: Deactivated successfully. Apr 17 23:36:44.574122 systemd[1]: Started cri-containerd-6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762.scope - libcontainer container 6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762. Apr 17 23:36:44.612674 containerd[2002]: time="2026-04-17T23:36:44.612559643Z" level=info msg="StartContainer for \"6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762\" returns successfully" Apr 17 23:36:44.634216 systemd[1]: cri-containerd-6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762.scope: Deactivated successfully. 
Apr 17 23:36:44.773979 containerd[2002]: time="2026-04-17T23:36:44.743452212Z" level=info msg="shim disconnected" id=6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762 namespace=k8s.io Apr 17 23:36:44.773979 containerd[2002]: time="2026-04-17T23:36:44.773235600Z" level=warning msg="cleaning up after shim disconnected" id=6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762 namespace=k8s.io Apr 17 23:36:44.773979 containerd[2002]: time="2026-04-17T23:36:44.773257270Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:45.358360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6723852f4e5e7113d06d8a5a3787bf632d6129001cfbbbb0d1468e6802854762-rootfs.mount: Deactivated successfully. Apr 17 23:36:45.876084 kubelet[3202]: E0417 23:36:45.876015 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:47.097236 containerd[2002]: time="2026-04-17T23:36:47.097182077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:47.100506 containerd[2002]: time="2026-04-17T23:36:47.100427710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 17 23:36:47.104709 containerd[2002]: time="2026-04-17T23:36:47.104661429Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:47.108575 containerd[2002]: time="2026-04-17T23:36:47.108521608Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:47.109379 containerd[2002]: time="2026-04-17T23:36:47.109336086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.62033793s" Apr 17 23:36:47.109473 containerd[2002]: time="2026-04-17T23:36:47.109379896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:36:47.110969 containerd[2002]: time="2026-04-17T23:36:47.110936374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:36:47.160264 containerd[2002]: time="2026-04-17T23:36:47.159681069Z" level=info msg="CreateContainer within sandbox \"87981d27151a40dda85b7712492adbe82fe30d977171bb07985dc8415455fb2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:36:47.208194 containerd[2002]: time="2026-04-17T23:36:47.208057420Z" level=info msg="CreateContainer within sandbox \"87981d27151a40dda85b7712492adbe82fe30d977171bb07985dc8415455fb2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92\"" Apr 17 23:36:47.211724 containerd[2002]: time="2026-04-17T23:36:47.210170826Z" level=info msg="StartContainer for \"8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92\"" Apr 17 23:36:47.271254 systemd[1]: Started cri-containerd-8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92.scope - libcontainer container 
8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92. Apr 17 23:36:47.357496 containerd[2002]: time="2026-04-17T23:36:47.357339479Z" level=info msg="StartContainer for \"8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92\" returns successfully" Apr 17 23:36:47.876528 kubelet[3202]: E0417 23:36:47.876475 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:48.011060 kubelet[3202]: I0417 23:36:48.010992 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66d74b4486-wv8nk" podStartSLOduration=1.676878378 podStartE2EDuration="6.010969626s" podCreationTimestamp="2026-04-17 23:36:42 +0000 UTC" firstStartedPulling="2026-04-17 23:36:42.776660311 +0000 UTC m=+22.084126153" lastFinishedPulling="2026-04-17 23:36:47.110751558 +0000 UTC m=+26.418217401" observedRunningTime="2026-04-17 23:36:48.010247743 +0000 UTC m=+27.317713605" watchObservedRunningTime="2026-04-17 23:36:48.010969626 +0000 UTC m=+27.318435465" Apr 17 23:36:48.120537 systemd[1]: run-containerd-runc-k8s.io-8bb601b1f2e1eb4e53fdb040a286fcd99e0b8dd87a22995705fce8188a51fd92-runc.FKCvuq.mount: Deactivated successfully. 
Apr 17 23:36:49.001524 kubelet[3202]: I0417 23:36:49.001458 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:49.876358 kubelet[3202]: E0417 23:36:49.876284 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:51.876542 kubelet[3202]: E0417 23:36:51.875885 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:53.876032 kubelet[3202]: E0417 23:36:53.875980 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:55.876160 kubelet[3202]: E0417 23:36:55.876102 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:57.876650 kubelet[3202]: E0417 23:36:57.876574 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:59.538792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214744161.mount: Deactivated successfully. Apr 17 23:36:59.593114 containerd[2002]: time="2026-04-17T23:36:59.584896520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:59.595240 containerd[2002]: time="2026-04-17T23:36:59.595195799Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:59.598522 containerd[2002]: time="2026-04-17T23:36:59.588899988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:36:59.599913 containerd[2002]: time="2026-04-17T23:36:59.598809971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:59.600205 containerd[2002]: time="2026-04-17T23:36:59.600171259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.489194203s" Apr 17 23:36:59.600334 containerd[2002]: time="2026-04-17T23:36:59.600310793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:36:59.606579 containerd[2002]: time="2026-04-17T23:36:59.606533611Z" level=info msg="CreateContainer within sandbox 
\"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:36:59.649540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655114100.mount: Deactivated successfully. Apr 17 23:36:59.650594 containerd[2002]: time="2026-04-17T23:36:59.650440906Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27\"" Apr 17 23:36:59.654064 containerd[2002]: time="2026-04-17T23:36:59.654023344Z" level=info msg="StartContainer for \"b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27\"" Apr 17 23:36:59.718094 systemd[1]: Started cri-containerd-b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27.scope - libcontainer container b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27. Apr 17 23:36:59.759169 containerd[2002]: time="2026-04-17T23:36:59.759103407Z" level=info msg="StartContainer for \"b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27\" returns successfully" Apr 17 23:36:59.815555 systemd[1]: cri-containerd-b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27.scope: Deactivated successfully. 
Apr 17 23:36:59.876382 kubelet[3202]: E0417 23:36:59.876324 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:36:59.941102 containerd[2002]: time="2026-04-17T23:36:59.941014492Z" level=info msg="shim disconnected" id=b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27 namespace=k8s.io Apr 17 23:36:59.941102 containerd[2002]: time="2026-04-17T23:36:59.941097874Z" level=warning msg="cleaning up after shim disconnected" id=b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27 namespace=k8s.io Apr 17 23:36:59.941102 containerd[2002]: time="2026-04-17T23:36:59.941111315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:00.095896 containerd[2002]: time="2026-04-17T23:37:00.093520315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:37:00.537628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0929a75b961f4e80f88a09b533c0003c8712376e905a555ce296a08e2adfc27-rootfs.mount: Deactivated successfully. 
Apr 17 23:37:01.880100 kubelet[3202]: E0417 23:37:01.878706 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:37:03.876660 kubelet[3202]: E0417 23:37:03.876442 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:37:04.209753 kubelet[3202]: I0417 23:37:04.207693 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:05.144268 containerd[2002]: time="2026-04-17T23:37:05.143769539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:05.151442 containerd[2002]: time="2026-04-17T23:37:05.150880627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:37:05.155080 containerd[2002]: time="2026-04-17T23:37:05.155042757Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:05.175241 containerd[2002]: time="2026-04-17T23:37:05.174066844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:05.178934 containerd[2002]: time="2026-04-17T23:37:05.178876589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.085273644s" Apr 17 23:37:05.183514 containerd[2002]: time="2026-04-17T23:37:05.183461929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:37:05.217545 containerd[2002]: time="2026-04-17T23:37:05.217502255Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:37:05.246226 containerd[2002]: time="2026-04-17T23:37:05.246176451Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60\"" Apr 17 23:37:05.249131 containerd[2002]: time="2026-04-17T23:37:05.248158677Z" level=info msg="StartContainer for \"f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60\"" Apr 17 23:37:05.342842 systemd[1]: Started cri-containerd-f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60.scope - libcontainer container f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60. 
Apr 17 23:37:05.402410 containerd[2002]: time="2026-04-17T23:37:05.402203811Z" level=info msg="StartContainer for \"f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60\" returns successfully" Apr 17 23:37:05.880501 kubelet[3202]: E0417 23:37:05.876142 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:37:06.575818 systemd[1]: cri-containerd-f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60.scope: Deactivated successfully. Apr 17 23:37:06.616894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60-rootfs.mount: Deactivated successfully. Apr 17 23:37:06.620347 containerd[2002]: time="2026-04-17T23:37:06.620132908Z" level=info msg="shim disconnected" id=f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60 namespace=k8s.io Apr 17 23:37:06.620347 containerd[2002]: time="2026-04-17T23:37:06.620198349Z" level=warning msg="cleaning up after shim disconnected" id=f0271545b205551c4be5ced782a47c43ef0bf3ac159577e8abc248a52a63ab60 namespace=k8s.io Apr 17 23:37:06.620347 containerd[2002]: time="2026-04-17T23:37:06.620211340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:06.644731 kubelet[3202]: I0417 23:37:06.631956 3202 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 17 23:37:06.953055 kubelet[3202]: I0417 23:37:06.951862 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9566q\" (UniqueName: \"kubernetes.io/projected/e73a167e-3582-40a4-9b34-7572429fc278-kube-api-access-9566q\") pod \"goldmane-cccfbd5cf-8w592\" (UID: \"e73a167e-3582-40a4-9b34-7572429fc278\") " 
pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:06.953768 kubelet[3202]: I0417 23:37:06.953629 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sklt\" (UniqueName: \"kubernetes.io/projected/1537c3df-d617-414a-93ca-eeed9a0ad8c4-kube-api-access-5sklt\") pod \"calico-apiserver-846d8859d6-lh2jg\" (UID: \"1537c3df-d617-414a-93ca-eeed9a0ad8c4\") " pod="calico-system/calico-apiserver-846d8859d6-lh2jg" Apr 17 23:37:06.953768 kubelet[3202]: I0417 23:37:06.953702 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73a167e-3582-40a4-9b34-7572429fc278-config\") pod \"goldmane-cccfbd5cf-8w592\" (UID: \"e73a167e-3582-40a4-9b34-7572429fc278\") " pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:06.954115 kubelet[3202]: I0417 23:37:06.953929 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b369e486-7b42-48cf-8775-02be039bd5a7-tigera-ca-bundle\") pod \"calico-kube-controllers-744ddc4d96-zx6kx\" (UID: \"b369e486-7b42-48cf-8775-02be039bd5a7\") " pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" Apr 17 23:37:06.954280 kubelet[3202]: I0417 23:37:06.953977 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pw4\" (UniqueName: \"kubernetes.io/projected/5afec07c-296f-444d-884a-ca8b664e1c97-kube-api-access-p4pw4\") pod \"coredns-66bc5c9577-hdsmf\" (UID: \"5afec07c-296f-444d-884a-ca8b664e1c97\") " pod="kube-system/coredns-66bc5c9577-hdsmf" Apr 17 23:37:06.954280 kubelet[3202]: I0417 23:37:06.954220 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6754cc5-f109-476a-ab6e-ba6495a198d8-config-volume\") pod 
\"coredns-66bc5c9577-cq57f\" (UID: \"f6754cc5-f109-476a-ab6e-ba6495a198d8\") " pod="kube-system/coredns-66bc5c9577-cq57f" Apr 17 23:37:06.954515 kubelet[3202]: I0417 23:37:06.954248 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e73a167e-3582-40a4-9b34-7572429fc278-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-8w592\" (UID: \"e73a167e-3582-40a4-9b34-7572429fc278\") " pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:06.954515 kubelet[3202]: I0417 23:37:06.954379 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvnf4\" (UniqueName: \"kubernetes.io/projected/b369e486-7b42-48cf-8775-02be039bd5a7-kube-api-access-pvnf4\") pod \"calico-kube-controllers-744ddc4d96-zx6kx\" (UID: \"b369e486-7b42-48cf-8775-02be039bd5a7\") " pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" Apr 17 23:37:06.954814 kubelet[3202]: I0417 23:37:06.954410 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1537c3df-d617-414a-93ca-eeed9a0ad8c4-calico-apiserver-certs\") pod \"calico-apiserver-846d8859d6-lh2jg\" (UID: \"1537c3df-d617-414a-93ca-eeed9a0ad8c4\") " pod="calico-system/calico-apiserver-846d8859d6-lh2jg" Apr 17 23:37:06.954814 kubelet[3202]: I0417 23:37:06.954687 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5afec07c-296f-444d-884a-ca8b664e1c97-config-volume\") pod \"coredns-66bc5c9577-hdsmf\" (UID: \"5afec07c-296f-444d-884a-ca8b664e1c97\") " pod="kube-system/coredns-66bc5c9577-hdsmf" Apr 17 23:37:06.955147 kubelet[3202]: I0417 23:37:06.954898 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qkc8\" 
(UniqueName: \"kubernetes.io/projected/f6754cc5-f109-476a-ab6e-ba6495a198d8-kube-api-access-9qkc8\") pod \"coredns-66bc5c9577-cq57f\" (UID: \"f6754cc5-f109-476a-ab6e-ba6495a198d8\") " pod="kube-system/coredns-66bc5c9577-cq57f" Apr 17 23:37:06.955147 kubelet[3202]: I0417 23:37:06.954933 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e73a167e-3582-40a4-9b34-7572429fc278-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-8w592\" (UID: \"e73a167e-3582-40a4-9b34-7572429fc278\") " pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:06.976166 systemd[1]: Created slice kubepods-besteffort-podab3820f2_82fb_4fe2_a46c_ca486562fb4d.slice - libcontainer container kubepods-besteffort-podab3820f2_82fb_4fe2_a46c_ca486562fb4d.slice. Apr 17 23:37:06.978078 systemd[1]: Created slice kubepods-besteffort-podb369e486_7b42_48cf_8775_02be039bd5a7.slice - libcontainer container kubepods-besteffort-podb369e486_7b42_48cf_8775_02be039bd5a7.slice. Apr 17 23:37:06.984152 systemd[1]: Created slice kubepods-burstable-pod5afec07c_296f_444d_884a_ca8b664e1c97.slice - libcontainer container kubepods-burstable-pod5afec07c_296f_444d_884a_ca8b664e1c97.slice. Apr 17 23:37:06.997397 systemd[1]: Created slice kubepods-burstable-podf6754cc5_f109_476a_ab6e_ba6495a198d8.slice - libcontainer container kubepods-burstable-podf6754cc5_f109_476a_ab6e_ba6495a198d8.slice. Apr 17 23:37:07.010477 systemd[1]: Created slice kubepods-besteffort-pod1537c3df_d617_414a_93ca_eeed9a0ad8c4.slice - libcontainer container kubepods-besteffort-pod1537c3df_d617_414a_93ca_eeed9a0ad8c4.slice. Apr 17 23:37:07.020026 systemd[1]: Created slice kubepods-besteffort-pode73a167e_3582_40a4_9b34_7572429fc278.slice - libcontainer container kubepods-besteffort-pode73a167e_3582_40a4_9b34_7572429fc278.slice. 
Apr 17 23:37:07.031307 systemd[1]: Created slice kubepods-besteffort-pod73eb78f2_5007_4da6_b75e_823d0f53d5f3.slice - libcontainer container kubepods-besteffort-pod73eb78f2_5007_4da6_b75e_823d0f53d5f3.slice. Apr 17 23:37:07.059765 kubelet[3202]: I0417 23:37:07.057658 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zf8\" (UniqueName: \"kubernetes.io/projected/ab3820f2-82fb-4fe2-a46c-ca486562fb4d-kube-api-access-l8zf8\") pod \"calico-apiserver-846d8859d6-9b2kl\" (UID: \"ab3820f2-82fb-4fe2-a46c-ca486562fb4d\") " pod="calico-system/calico-apiserver-846d8859d6-9b2kl" Apr 17 23:37:07.059765 kubelet[3202]: I0417 23:37:07.057737 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab3820f2-82fb-4fe2-a46c-ca486562fb4d-calico-apiserver-certs\") pod \"calico-apiserver-846d8859d6-9b2kl\" (UID: \"ab3820f2-82fb-4fe2-a46c-ca486562fb4d\") " pod="calico-system/calico-apiserver-846d8859d6-9b2kl" Apr 17 23:37:07.059765 kubelet[3202]: I0417 23:37:07.058089 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-backend-key-pair\") pod \"whisker-b9bff547b-ptj7q\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:07.059765 kubelet[3202]: I0417 23:37:07.058177 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-nginx-config\") pod \"whisker-b9bff547b-ptj7q\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:07.059765 kubelet[3202]: I0417 23:37:07.058199 3202 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwx8\" (UniqueName: \"kubernetes.io/projected/73eb78f2-5007-4da6-b75e-823d0f53d5f3-kube-api-access-sfwx8\") pod \"whisker-b9bff547b-ptj7q\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:07.060116 kubelet[3202]: I0417 23:37:07.058232 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-ca-bundle\") pod \"whisker-b9bff547b-ptj7q\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:07.228941 containerd[2002]: time="2026-04-17T23:37:07.228803040Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:37:07.259607 containerd[2002]: time="2026-04-17T23:37:07.259543012Z" level=info msg="CreateContainer within sandbox \"bdb48d0741c5fd9c7c7163c3946c508933c95f4685bf83b46eb8cb61419bc619\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed4349d8e87a6cfc6b16fe7a50247ee3ca289627e4b5cd254b5a07c5e4e21e16\"" Apr 17 23:37:07.265970 containerd[2002]: time="2026-04-17T23:37:07.265908930Z" level=info msg="StartContainer for \"ed4349d8e87a6cfc6b16fe7a50247ee3ca289627e4b5cd254b5a07c5e4e21e16\"" Apr 17 23:37:07.305306 containerd[2002]: time="2026-04-17T23:37:07.305204062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744ddc4d96-zx6kx,Uid:b369e486-7b42-48cf-8775-02be039bd5a7,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:07.309172 containerd[2002]: time="2026-04-17T23:37:07.308865555Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-846d8859d6-9b2kl,Uid:ab3820f2-82fb-4fe2-a46c-ca486562fb4d,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:07.310498 containerd[2002]: time="2026-04-17T23:37:07.310447017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cq57f,Uid:f6754cc5-f109-476a-ab6e-ba6495a198d8,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:07.313164 containerd[2002]: time="2026-04-17T23:37:07.312673787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hdsmf,Uid:5afec07c-296f-444d-884a-ca8b664e1c97,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:07.321073 containerd[2002]: time="2026-04-17T23:37:07.320761286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-lh2jg,Uid:1537c3df-d617-414a-93ca-eeed9a0ad8c4,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:07.338118 systemd[1]: Started cri-containerd-ed4349d8e87a6cfc6b16fe7a50247ee3ca289627e4b5cd254b5a07c5e4e21e16.scope - libcontainer container ed4349d8e87a6cfc6b16fe7a50247ee3ca289627e4b5cd254b5a07c5e4e21e16. Apr 17 23:37:07.341140 containerd[2002]: time="2026-04-17T23:37:07.341069175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8w592,Uid:e73a167e-3582-40a4-9b34-7572429fc278,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:07.342102 containerd[2002]: time="2026-04-17T23:37:07.342068678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9bff547b-ptj7q,Uid:73eb78f2-5007-4da6-b75e-823d0f53d5f3,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:07.470944 containerd[2002]: time="2026-04-17T23:37:07.470886511Z" level=info msg="StartContainer for \"ed4349d8e87a6cfc6b16fe7a50247ee3ca289627e4b5cd254b5a07c5e4e21e16\" returns successfully" Apr 17 23:37:07.892643 systemd[1]: Created slice kubepods-besteffort-podcdd62a40_a858_425f_a3e8_4e85787fe5f7.slice - libcontainer container kubepods-besteffort-podcdd62a40_a858_425f_a3e8_4e85787fe5f7.slice. 
Apr 17 23:37:07.901273 containerd[2002]: time="2026-04-17T23:37:07.901202629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttlp,Uid:cdd62a40-a858-425f-a3e8-4e85787fe5f7,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:09.316888 containerd[2002]: time="2026-04-17T23:37:09.315954621Z" level=error msg="Failed to destroy network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.317947 containerd[2002]: time="2026-04-17T23:37:09.317894581Z" level=error msg="Failed to destroy network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.320654 containerd[2002]: time="2026-04-17T23:37:09.320569277Z" level=error msg="encountered an error cleaning up failed sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.321110 containerd[2002]: time="2026-04-17T23:37:09.321071004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-9b2kl,Uid:ab3820f2-82fb-4fe2-a46c-ca486562fb4d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:37:09.321270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719-shm.mount: Deactivated successfully. Apr 17 23:37:09.335562 containerd[2002]: time="2026-04-17T23:37:09.335236889Z" level=error msg="Failed to destroy network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.336162 containerd[2002]: time="2026-04-17T23:37:09.335944943Z" level=error msg="encountered an error cleaning up failed sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.336162 containerd[2002]: time="2026-04-17T23:37:09.336026904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cq57f,Uid:f6754cc5-f109-476a-ab6e-ba6495a198d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.337803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d-shm.mount: Deactivated successfully. Apr 17 23:37:09.345370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644-shm.mount: Deactivated successfully. 
Apr 17 23:37:09.351340 containerd[2002]: time="2026-04-17T23:37:09.349298896Z" level=error msg="encountered an error cleaning up failed sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.351340 containerd[2002]: time="2026-04-17T23:37:09.349378541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9bff547b-ptj7q,Uid:73eb78f2-5007-4da6-b75e-823d0f53d5f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.378407 kubelet[3202]: E0417 23:37:09.378332 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.381557 kubelet[3202]: E0417 23:37:09.381497 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.381772 kubelet[3202]: E0417 23:37:09.381745 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.381839 kubelet[3202]: E0417 23:37:09.381786 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cq57f" Apr 17 23:37:09.381839 kubelet[3202]: E0417 23:37:09.381812 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cq57f" Apr 17 23:37:09.381964 kubelet[3202]: E0417 23:37:09.381927 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cq57f_kube-system(f6754cc5-f109-476a-ab6e-ba6495a198d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cq57f_kube-system(f6754cc5-f109-476a-ab6e-ba6495a198d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cq57f" podUID="f6754cc5-f109-476a-ab6e-ba6495a198d8" Apr 17 23:37:09.382312 kubelet[3202]: E0417 23:37:09.379290 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:09.382312 kubelet[3202]: E0417 23:37:09.382151 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9bff547b-ptj7q" Apr 17 23:37:09.382312 kubelet[3202]: E0417 23:37:09.382214 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b9bff547b-ptj7q_calico-system(73eb78f2-5007-4da6-b75e-823d0f53d5f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b9bff547b-ptj7q_calico-system(73eb78f2-5007-4da6-b75e-823d0f53d5f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9bff547b-ptj7q" podUID="73eb78f2-5007-4da6-b75e-823d0f53d5f3" Apr 17 23:37:09.382461 kubelet[3202]: E0417 23:37:09.381745 3202 kuberuntime_sandbox.go:71] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-846d8859d6-9b2kl" Apr 17 23:37:09.382461 kubelet[3202]: E0417 23:37:09.382247 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-846d8859d6-9b2kl" Apr 17 23:37:09.382461 kubelet[3202]: E0417 23:37:09.382272 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-846d8859d6-9b2kl_calico-system(ab3820f2-82fb-4fe2-a46c-ca486562fb4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-846d8859d6-9b2kl_calico-system(ab3820f2-82fb-4fe2-a46c-ca486562fb4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-846d8859d6-9b2kl" podUID="ab3820f2-82fb-4fe2-a46c-ca486562fb4d" Apr 17 23:37:09.383635 containerd[2002]: time="2026-04-17T23:37:09.383430001Z" level=error msg="Failed to destroy network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.388644 containerd[2002]: time="2026-04-17T23:37:09.388563139Z" level=error msg="encountered an error cleaning up failed sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.390510 containerd[2002]: time="2026-04-17T23:37:09.388678693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8w592,Uid:e73a167e-3582-40a4-9b34-7572429fc278,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.390687 kubelet[3202]: E0417 23:37:09.390157 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.390687 kubelet[3202]: E0417 23:37:09.390227 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:09.390687 kubelet[3202]: E0417 23:37:09.390254 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8w592" Apr 17 23:37:09.390905 kubelet[3202]: E0417 23:37:09.390318 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-8w592_calico-system(e73a167e-3582-40a4-9b34-7572429fc278)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-8w592_calico-system(e73a167e-3582-40a4-9b34-7572429fc278)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8w592" podUID="e73a167e-3582-40a4-9b34-7572429fc278" Apr 17 23:37:09.391887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89-shm.mount: Deactivated successfully. 
Apr 17 23:37:09.424104 containerd[2002]: time="2026-04-17T23:37:09.424019193Z" level=error msg="Failed to destroy network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.425636 containerd[2002]: time="2026-04-17T23:37:09.425147424Z" level=error msg="encountered an error cleaning up failed sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.425636 containerd[2002]: time="2026-04-17T23:37:09.425226849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744ddc4d96-zx6kx,Uid:b369e486-7b42-48cf-8775-02be039bd5a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.426064 kubelet[3202]: E0417 23:37:09.425712 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.426064 kubelet[3202]: E0417 23:37:09.425768 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" Apr 17 23:37:09.426064 kubelet[3202]: E0417 23:37:09.425794 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" Apr 17 23:37:09.427628 kubelet[3202]: E0417 23:37:09.426063 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-744ddc4d96-zx6kx_calico-system(b369e486-7b42-48cf-8775-02be039bd5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-744ddc4d96-zx6kx_calico-system(b369e486-7b42-48cf-8775-02be039bd5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" podUID="b369e486-7b42-48cf-8775-02be039bd5a7" Apr 17 23:37:09.433618 containerd[2002]: time="2026-04-17T23:37:09.433492022Z" level=error msg="Failed to destroy network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.434252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea-shm.mount: Deactivated successfully. Apr 17 23:37:09.447163 containerd[2002]: time="2026-04-17T23:37:09.434718040Z" level=error msg="encountered an error cleaning up failed sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.447431 containerd[2002]: time="2026-04-17T23:37:09.447394498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-lh2jg,Uid:1537c3df-d617-414a-93ca-eeed9a0ad8c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.447642 containerd[2002]: time="2026-04-17T23:37:09.442086673Z" level=error msg="Failed to destroy network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.450243 kubelet[3202]: E0417 23:37:09.447840 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.450243 kubelet[3202]: E0417 23:37:09.447924 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-846d8859d6-lh2jg" Apr 17 23:37:09.450243 kubelet[3202]: E0417 23:37:09.447949 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-846d8859d6-lh2jg" Apr 17 23:37:09.450414 kubelet[3202]: E0417 23:37:09.448026 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-846d8859d6-lh2jg_calico-system(1537c3df-d617-414a-93ca-eeed9a0ad8c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-846d8859d6-lh2jg_calico-system(1537c3df-d617-414a-93ca-eeed9a0ad8c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-846d8859d6-lh2jg" podUID="1537c3df-d617-414a-93ca-eeed9a0ad8c4" Apr 17 23:37:09.452622 
containerd[2002]: time="2026-04-17T23:37:09.447263341Z" level=error msg="Failed to destroy network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.452622 containerd[2002]: time="2026-04-17T23:37:09.452312536Z" level=error msg="encountered an error cleaning up failed sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.452622 containerd[2002]: time="2026-04-17T23:37:09.452398338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hdsmf,Uid:5afec07c-296f-444d-884a-ca8b664e1c97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.452622 containerd[2002]: time="2026-04-17T23:37:09.452326538Z" level=error msg="encountered an error cleaning up failed sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.452622 containerd[2002]: time="2026-04-17T23:37:09.452560429Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tttlp,Uid:cdd62a40-a858-425f-a3e8-4e85787fe5f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.452973 kubelet[3202]: E0417 23:37:09.452905 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.453039 kubelet[3202]: E0417 23:37:09.453003 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tttlp" Apr 17 23:37:09.454152 kubelet[3202]: E0417 23:37:09.453123 3202 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:09.454152 kubelet[3202]: E0417 23:37:09.453166 3202 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hdsmf" Apr 17 23:37:09.454152 kubelet[3202]: E0417 23:37:09.453190 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hdsmf" Apr 17 23:37:09.454152 kubelet[3202]: E0417 23:37:09.453193 3202 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tttlp" Apr 17 23:37:09.454373 kubelet[3202]: E0417 23:37:09.453255 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hdsmf_kube-system(5afec07c-296f-444d-884a-ca8b664e1c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hdsmf_kube-system(5afec07c-296f-444d-884a-ca8b664e1c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-hdsmf" podUID="5afec07c-296f-444d-884a-ca8b664e1c97" Apr 17 23:37:09.454373 kubelet[3202]: E0417 23:37:09.453286 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tttlp_calico-system(cdd62a40-a858-425f-a3e8-4e85787fe5f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tttlp_calico-system(cdd62a40-a858-425f-a3e8-4e85787fe5f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:37:10.260379 containerd[2002]: time="2026-04-17T23:37:10.259959939Z" level=info msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\"" Apr 17 23:37:10.265036 kubelet[3202]: I0417 23:37:10.264847 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:10.265036 kubelet[3202]: I0417 23:37:10.264980 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:10.283061 kubelet[3202]: I0417 23:37:10.282943 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:10.287617 containerd[2002]: time="2026-04-17T23:37:10.287539992Z" level=info msg="Ensure that sandbox 8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d in task-service has been cleanup successfully" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298511 3202 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298577 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298596 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298623 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298647 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:10.298831 kubelet[3202]: I0417 23:37:10.298667 3202 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:10.305226 containerd[2002]: time="2026-04-17T23:37:10.304530944Z" level=info msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" Apr 17 23:37:10.305640 containerd[2002]: time="2026-04-17T23:37:10.305589583Z" level=info msg="Ensure that sandbox b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6 in task-service has been cleanup successfully" Apr 17 23:37:10.311346 containerd[2002]: time="2026-04-17T23:37:10.311280010Z" level=info msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" Apr 17 23:37:10.311566 containerd[2002]: time="2026-04-17T23:37:10.311527587Z" level=info msg="Ensure that sandbox ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644 in task-service has been cleanup successfully" Apr 17 23:37:10.312019 
containerd[2002]: time="2026-04-17T23:37:10.311981455Z" level=info msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" Apr 17 23:37:10.312207 containerd[2002]: time="2026-04-17T23:37:10.312171278Z" level=info msg="Ensure that sandbox b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89 in task-service has been cleanup successfully" Apr 17 23:37:10.315041 containerd[2002]: time="2026-04-17T23:37:10.314783490Z" level=info msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" Apr 17 23:37:10.315387 containerd[2002]: time="2026-04-17T23:37:10.315360859Z" level=info msg="Ensure that sandbox 62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719 in task-service has been cleanup successfully" Apr 17 23:37:10.326486 containerd[2002]: time="2026-04-17T23:37:10.326235585Z" level=info msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" Apr 17 23:37:10.327571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6-shm.mount: Deactivated successfully. Apr 17 23:37:10.328064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772-shm.mount: Deactivated successfully. Apr 17 23:37:10.328177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c-shm.mount: Deactivated successfully. 
Apr 17 23:37:10.340998 containerd[2002]: time="2026-04-17T23:37:10.340951045Z" level=info msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\"" Apr 17 23:37:10.341515 containerd[2002]: time="2026-04-17T23:37:10.341483096Z" level=info msg="Ensure that sandbox fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea in task-service has been cleanup successfully" Apr 17 23:37:10.343184 containerd[2002]: time="2026-04-17T23:37:10.343152775Z" level=info msg="Ensure that sandbox b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c in task-service has been cleanup successfully" Apr 17 23:37:10.349328 containerd[2002]: time="2026-04-17T23:37:10.337420869Z" level=info msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\"" Apr 17 23:37:10.349832 containerd[2002]: time="2026-04-17T23:37:10.349799703Z" level=info msg="Ensure that sandbox 9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772 in task-service has been cleanup successfully" Apr 17 23:37:10.552645 containerd[2002]: time="2026-04-17T23:37:10.550329400Z" level=error msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" failed" error="failed to destroy network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:10.553358 containerd[2002]: time="2026-04-17T23:37:10.553134992Z" level=error msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" failed" error="failed to destroy network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 17 23:37:10.553358 containerd[2002]: time="2026-04-17T23:37:10.553266003Z" level=error msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" failed" error="failed to destroy network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:10.553927 kubelet[3202]: E0417 23:37:10.553824 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:10.560667 containerd[2002]: time="2026-04-17T23:37:10.560602962Z" level=error msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" failed" error="failed to destroy network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:10.566709 containerd[2002]: time="2026-04-17T23:37:10.566646665Z" level=error msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" failed" error="failed to destroy network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:37:10.572878 kubelet[3202]: E0417 23:37:10.572812 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:10.575981 containerd[2002]: time="2026-04-17T23:37:10.575734792Z" level=error msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" failed" error="failed to destroy network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:10.579759 containerd[2002]: time="2026-04-17T23:37:10.579692532Z" level=error msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" failed" error="failed to destroy network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:37:10.581147 containerd[2002]: time="2026-04-17T23:37:10.581104224Z" level=error msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" failed" error="failed to destroy network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:37:10.584573 kubelet[3202]: E0417 23:37:10.584516 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:10.603797 kubelet[3202]: E0417 23:37:10.603114 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:10.603797 kubelet[3202]: E0417 23:37:10.603170 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:10.603797 kubelet[3202]: E0417 23:37:10.603191 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"} Apr 17 23:37:10.603797 kubelet[3202]: E0417 23:37:10.603267 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"ab3820f2-82fb-4fe2-a46c-ca486562fb4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.604199 kubelet[3202]: E0417 23:37:10.603305 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab3820f2-82fb-4fe2-a46c-ca486562fb4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-846d8859d6-9b2kl" podUID="ab3820f2-82fb-4fe2-a46c-ca486562fb4d" Apr 17 23:37:10.604199 kubelet[3202]: E0417 23:37:10.603197 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644"} Apr 17 23:37:10.604199 kubelet[3202]: E0417 23:37:10.603356 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6754cc5-f109-476a-ab6e-ba6495a198d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.604199 kubelet[3202]: E0417 23:37:10.603379 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f6754cc5-f109-476a-ab6e-ba6495a198d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cq57f" podUID="f6754cc5-f109-476a-ab6e-ba6495a198d8" Apr 17 23:37:10.604479 kubelet[3202]: E0417 23:37:10.603327 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:10.604479 kubelet[3202]: E0417 23:37:10.553914 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6"} Apr 17 23:37:10.604479 kubelet[3202]: E0417 23:37:10.572889 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"} Apr 17 23:37:10.604479 kubelet[3202]: E0417 23:37:10.603435 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1537c3df-d617-414a-93ca-eeed9a0ad8c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Apr 17 23:37:10.604479 kubelet[3202]: E0417 23:37:10.584578 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719"} Apr 17 23:37:10.604736 kubelet[3202]: E0417 23:37:10.603460 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1537c3df-d617-414a-93ca-eeed9a0ad8c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-846d8859d6-lh2jg" podUID="1537c3df-d617-414a-93ca-eeed9a0ad8c4" Apr 17 23:37:10.604736 kubelet[3202]: E0417 23:37:10.603467 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.604736 kubelet[3202]: E0417 23:37:10.603493 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9bff547b-ptj7q" podUID="73eb78f2-5007-4da6-b75e-823d0f53d5f3" Apr 17 23:37:10.604977 kubelet[3202]: E0417 23:37:10.603436 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.604977 kubelet[3202]: E0417 23:37:10.603503 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:10.604977 kubelet[3202]: E0417 23:37:10.603524 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c"} Apr 17 23:37:10.604977 kubelet[3202]: E0417 23:37:10.603529 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdd62a40-a858-425f-a3e8-4e85787fe5f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-tttlp" podUID="cdd62a40-a858-425f-a3e8-4e85787fe5f7" Apr 17 23:37:10.605580 kubelet[3202]: E0417 23:37:10.603548 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5afec07c-296f-444d-884a-ca8b664e1c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.605580 kubelet[3202]: E0417 23:37:10.603572 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5afec07c-296f-444d-884a-ca8b664e1c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hdsmf" podUID="5afec07c-296f-444d-884a-ca8b664e1c97" Apr 17 23:37:10.605580 kubelet[3202]: E0417 23:37:10.603598 3202 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:10.605580 kubelet[3202]: E0417 23:37:10.603616 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89"} Apr 17 23:37:10.606272 kubelet[3202]: E0417 23:37:10.603655 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e73a167e-3582-40a4-9b34-7572429fc278\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.606272 kubelet[3202]: E0417 23:37:10.603682 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e73a167e-3582-40a4-9b34-7572429fc278\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8w592" podUID="e73a167e-3582-40a4-9b34-7572429fc278" Apr 17 23:37:10.606272 kubelet[3202]: E0417 23:37:10.603405 3202 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"} Apr 17 23:37:10.606272 kubelet[3202]: E0417 23:37:10.603714 3202 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b369e486-7b42-48cf-8775-02be039bd5a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:37:10.606527 kubelet[3202]: E0417 23:37:10.603735 3202 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b369e486-7b42-48cf-8775-02be039bd5a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" podUID="b369e486-7b42-48cf-8775-02be039bd5a7" Apr 17 23:37:10.867542 kubelet[3202]: I0417 23:37:10.861981 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8q4h8" podStartSLOduration=6.432268439 podStartE2EDuration="28.843081242s" podCreationTimestamp="2026-04-17 23:36:42 +0000 UTC" firstStartedPulling="2026-04-17 23:36:42.773930619 +0000 UTC m=+22.081396462" lastFinishedPulling="2026-04-17 23:37:05.184743411 +0000 UTC m=+44.492209265" observedRunningTime="2026-04-17 23:37:08.246057214 +0000 UTC m=+47.553523076" watchObservedRunningTime="2026-04-17 23:37:10.843081242 +0000 UTC m=+50.150547103" Apr 17 23:37:11.310132 containerd[2002]: time="2026-04-17T23:37:11.310083248Z" level=info msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.597 [INFO][4655] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.598 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" iface="eth0" netns="/var/run/netns/cni-9b63d665-428f-72a5-d66b-04afd581f8b0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.598 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" iface="eth0" netns="/var/run/netns/cni-9b63d665-428f-72a5-d66b-04afd581f8b0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.599 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" iface="eth0" netns="/var/run/netns/cni-9b63d665-428f-72a5-d66b-04afd581f8b0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.599 [INFO][4655] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.599 [INFO][4655] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.947 [INFO][4662] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.947 [INFO][4662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.947 [INFO][4662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.957 [WARNING][4662] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.957 [INFO][4662] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.959 [INFO][4662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:11.965549 containerd[2002]: 2026-04-17 23:37:11.963 [INFO][4655] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:11.968030 containerd[2002]: time="2026-04-17T23:37:11.967987642Z" level=info msg="TearDown network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" successfully" Apr 17 23:37:11.968135 containerd[2002]: time="2026-04-17T23:37:11.968031718Z" level=info msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" returns successfully" Apr 17 23:37:11.971224 systemd[1]: run-netns-cni\x2d9b63d665\x2d428f\x2d72a5\x2dd66b\x2d04afd581f8b0.mount: Deactivated successfully. 
Apr 17 23:37:12.042490 kubelet[3202]: I0417 23:37:12.042428 3202 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-backend-key-pair\") pod \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " Apr 17 23:37:12.042490 kubelet[3202]: I0417 23:37:12.042484 3202 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-nginx-config\") pod \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " Apr 17 23:37:12.043174 kubelet[3202]: I0417 23:37:12.042535 3202 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-ca-bundle\") pod \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " Apr 17 23:37:12.043174 kubelet[3202]: I0417 23:37:12.042578 3202 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwx8\" (UniqueName: \"kubernetes.io/projected/73eb78f2-5007-4da6-b75e-823d0f53d5f3-kube-api-access-sfwx8\") pod \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\" (UID: \"73eb78f2-5007-4da6-b75e-823d0f53d5f3\") " Apr 17 23:37:12.050510 kubelet[3202]: I0417 23:37:12.048911 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "73eb78f2-5007-4da6-b75e-823d0f53d5f3" (UID: "73eb78f2-5007-4da6-b75e-823d0f53d5f3"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:37:12.050510 kubelet[3202]: I0417 23:37:12.043819 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "73eb78f2-5007-4da6-b75e-823d0f53d5f3" (UID: "73eb78f2-5007-4da6-b75e-823d0f53d5f3"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:37:12.053797 kubelet[3202]: I0417 23:37:12.053753 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eb78f2-5007-4da6-b75e-823d0f53d5f3-kube-api-access-sfwx8" (OuterVolumeSpecName: "kube-api-access-sfwx8") pod "73eb78f2-5007-4da6-b75e-823d0f53d5f3" (UID: "73eb78f2-5007-4da6-b75e-823d0f53d5f3"). InnerVolumeSpecName "kube-api-access-sfwx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:37:12.054671 kubelet[3202]: I0417 23:37:12.054639 3202 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "73eb78f2-5007-4da6-b75e-823d0f53d5f3" (UID: "73eb78f2-5007-4da6-b75e-823d0f53d5f3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:37:12.057443 systemd[1]: var-lib-kubelet-pods-73eb78f2\x2d5007\x2d4da6\x2db75e\x2d823d0f53d5f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfwx8.mount: Deactivated successfully. Apr 17 23:37:12.057746 systemd[1]: var-lib-kubelet-pods-73eb78f2\x2d5007\x2d4da6\x2db75e\x2d823d0f53d5f3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:37:12.144447 kubelet[3202]: I0417 23:37:12.143420 3202 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-ca-bundle\") on node \"ip-172-31-30-7\" DevicePath \"\"" Apr 17 23:37:12.144447 kubelet[3202]: I0417 23:37:12.143496 3202 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sfwx8\" (UniqueName: \"kubernetes.io/projected/73eb78f2-5007-4da6-b75e-823d0f53d5f3-kube-api-access-sfwx8\") on node \"ip-172-31-30-7\" DevicePath \"\"" Apr 17 23:37:12.144447 kubelet[3202]: I0417 23:37:12.143515 3202 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73eb78f2-5007-4da6-b75e-823d0f53d5f3-whisker-backend-key-pair\") on node \"ip-172-31-30-7\" DevicePath \"\"" Apr 17 23:37:12.144447 kubelet[3202]: I0417 23:37:12.143529 3202 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/73eb78f2-5007-4da6-b75e-823d0f53d5f3-nginx-config\") on node \"ip-172-31-30-7\" DevicePath \"\"" Apr 17 23:37:12.330632 systemd[1]: Removed slice kubepods-besteffort-pod73eb78f2_5007_4da6_b75e_823d0f53d5f3.slice - libcontainer container kubepods-besteffort-pod73eb78f2_5007_4da6_b75e_823d0f53d5f3.slice. Apr 17 23:37:12.469052 systemd[1]: Created slice kubepods-besteffort-pod9e279fa9_df0f_4433_89a5_1177ce8e3e27.slice - libcontainer container kubepods-besteffort-pod9e279fa9_df0f_4433_89a5_1177ce8e3e27.slice. 
Apr 17 23:37:12.546211 kubelet[3202]: I0417 23:37:12.545961 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9e279fa9-df0f-4433-89a5-1177ce8e3e27-nginx-config\") pod \"whisker-58cf4cf77c-6mg8m\" (UID: \"9e279fa9-df0f-4433-89a5-1177ce8e3e27\") " pod="calico-system/whisker-58cf4cf77c-6mg8m" Apr 17 23:37:12.546211 kubelet[3202]: I0417 23:37:12.546018 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e279fa9-df0f-4433-89a5-1177ce8e3e27-whisker-backend-key-pair\") pod \"whisker-58cf4cf77c-6mg8m\" (UID: \"9e279fa9-df0f-4433-89a5-1177ce8e3e27\") " pod="calico-system/whisker-58cf4cf77c-6mg8m" Apr 17 23:37:12.546211 kubelet[3202]: I0417 23:37:12.546061 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd55g\" (UniqueName: \"kubernetes.io/projected/9e279fa9-df0f-4433-89a5-1177ce8e3e27-kube-api-access-nd55g\") pod \"whisker-58cf4cf77c-6mg8m\" (UID: \"9e279fa9-df0f-4433-89a5-1177ce8e3e27\") " pod="calico-system/whisker-58cf4cf77c-6mg8m" Apr 17 23:37:12.546211 kubelet[3202]: I0417 23:37:12.546090 3202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e279fa9-df0f-4433-89a5-1177ce8e3e27-whisker-ca-bundle\") pod \"whisker-58cf4cf77c-6mg8m\" (UID: \"9e279fa9-df0f-4433-89a5-1177ce8e3e27\") " pod="calico-system/whisker-58cf4cf77c-6mg8m" Apr 17 23:37:12.781544 containerd[2002]: time="2026-04-17T23:37:12.779020562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cf4cf77c-6mg8m,Uid:9e279fa9-df0f-4433-89a5-1177ce8e3e27,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:12.921073 kubelet[3202]: I0417 23:37:12.920429 3202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="73eb78f2-5007-4da6-b75e-823d0f53d5f3" path="/var/lib/kubelet/pods/73eb78f2-5007-4da6-b75e-823d0f53d5f3/volumes" Apr 17 23:37:13.084046 kernel: calico-node[4701]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:37:13.346663 systemd-networkd[1894]: calia9b43f5a25c: Link UP Apr 17 23:37:13.348732 systemd-networkd[1894]: calia9b43f5a25c: Gained carrier Apr 17 23:37:13.363452 (udev-worker)[4822]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:12.859 [ERROR][4781] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:12.896 [INFO][4781] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0 whisker-58cf4cf77c- calico-system 9e279fa9-df0f-4433-89a5-1177ce8e3e27 935 0 2026-04-17 23:37:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58cf4cf77c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-7 whisker-58cf4cf77c-6mg8m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia9b43f5a25c [] [] }} ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:12.896 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" 
WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.090 [INFO][4800] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" HandleID="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Workload="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.191 [INFO][4800] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" HandleID="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Workload="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"whisker-58cf4cf77c-6mg8m", "timestamp":"2026-04-17 23:37:13.090371937 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fef20)} Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.191 [INFO][4800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.192 [INFO][4800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.192 [INFO][4800] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.211 [INFO][4800] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.222 [INFO][4800] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.232 [INFO][4800] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.236 [INFO][4800] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.239 [INFO][4800] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.239 [INFO][4800] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.245 [INFO][4800] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401 Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.252 [INFO][4800] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.263 [INFO][4800] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.193/26] block=192.168.14.192/26 
handle="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.264 [INFO][4800] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.193/26] handle="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" host="ip-172-31-30-7" Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.264 [INFO][4800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:13.385291 containerd[2002]: 2026-04-17 23:37:13.264 [INFO][4800] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.193/26] IPv6=[] ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" HandleID="k8s-pod-network.901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Workload="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.278 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0", GenerateName:"whisker-58cf4cf77c-", Namespace:"calico-system", SelfLink:"", UID:"9e279fa9-df0f-4433-89a5-1177ce8e3e27", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58cf4cf77c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"whisker-58cf4cf77c-6mg8m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia9b43f5a25c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.278 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.193/32] ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.278 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b43f5a25c ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.330 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.332 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" 
Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0", GenerateName:"whisker-58cf4cf77c-", Namespace:"calico-system", SelfLink:"", UID:"9e279fa9-df0f-4433-89a5-1177ce8e3e27", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58cf4cf77c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401", Pod:"whisker-58cf4cf77c-6mg8m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia9b43f5a25c", MAC:"e2:b5:99:77:a2:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:13.392401 containerd[2002]: 2026-04-17 23:37:13.371 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401" Namespace="calico-system" Pod="whisker-58cf4cf77c-6mg8m" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--58cf4cf77c--6mg8m-eth0" Apr 17 23:37:14.089366 containerd[2002]: 
time="2026-04-17T23:37:14.082989834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:14.089753 containerd[2002]: time="2026-04-17T23:37:14.089350050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:14.089753 containerd[2002]: time="2026-04-17T23:37:14.089375021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:14.089753 containerd[2002]: time="2026-04-17T23:37:14.089531228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:14.251301 systemd[1]: Started cri-containerd-901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401.scope - libcontainer container 901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401. Apr 17 23:37:14.273402 systemd-networkd[1894]: vxlan.calico: Link UP Apr 17 23:37:14.273413 systemd-networkd[1894]: vxlan.calico: Gained carrier Apr 17 23:37:14.366365 (udev-worker)[4821]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:37:14.425163 containerd[2002]: time="2026-04-17T23:37:14.422381370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cf4cf77c-6mg8m,Uid:9e279fa9-df0f-4433-89a5-1177ce8e3e27,Namespace:calico-system,Attempt:0,} returns sandbox id \"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401\"" Apr 17 23:37:14.476551 containerd[2002]: time="2026-04-17T23:37:14.476317733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:37:15.048109 systemd-networkd[1894]: calia9b43f5a25c: Gained IPv6LL Apr 17 23:37:15.433913 systemd-networkd[1894]: vxlan.calico: Gained IPv6LL Apr 17 23:37:16.176545 containerd[2002]: time="2026-04-17T23:37:16.176460251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:37:16.218561 containerd[2002]: time="2026-04-17T23:37:16.218413148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.742039155s" Apr 17 23:37:16.219244 containerd[2002]: time="2026-04-17T23:37:16.218718183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:37:16.219244 containerd[2002]: time="2026-04-17T23:37:16.219036011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:16.220351 containerd[2002]: time="2026-04-17T23:37:16.220161779Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Apr 17 23:37:16.224312 containerd[2002]: time="2026-04-17T23:37:16.224261008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:16.234399 containerd[2002]: time="2026-04-17T23:37:16.234336610Z" level=info msg="CreateContainer within sandbox \"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:37:16.253458 containerd[2002]: time="2026-04-17T23:37:16.252577709Z" level=info msg="CreateContainer within sandbox \"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"24d1781d1927fd40b246d4063c858a4699922524b1c5810b17db4179a93d3dc2\"" Apr 17 23:37:16.256817 containerd[2002]: time="2026-04-17T23:37:16.256762003Z" level=info msg="StartContainer for \"24d1781d1927fd40b246d4063c858a4699922524b1c5810b17db4179a93d3dc2\"" Apr 17 23:37:16.318139 systemd[1]: Started cri-containerd-24d1781d1927fd40b246d4063c858a4699922524b1c5810b17db4179a93d3dc2.scope - libcontainer container 24d1781d1927fd40b246d4063c858a4699922524b1c5810b17db4179a93d3dc2. 
Apr 17 23:37:16.376549 containerd[2002]: time="2026-04-17T23:37:16.376495564Z" level=info msg="StartContainer for \"24d1781d1927fd40b246d4063c858a4699922524b1c5810b17db4179a93d3dc2\" returns successfully" Apr 17 23:37:16.380157 containerd[2002]: time="2026-04-17T23:37:16.380108721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:37:17.808124 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.14.192:123 Apr 17 23:37:17.808224 ntpd[1957]: Listen normally on 9 calia9b43f5a25c [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:37:17.809771 ntpd[1957]: 17 Apr 23:37:17 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.14.192:123 Apr 17 23:37:17.809771 ntpd[1957]: 17 Apr 23:37:17 ntpd[1957]: Listen normally on 9 calia9b43f5a25c [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:37:17.809771 ntpd[1957]: 17 Apr 23:37:17 ntpd[1957]: Listen normally on 10 vxlan.calico [fe80::64d0:1ff:fed4:5ea6%5]:123 Apr 17 23:37:17.808287 ntpd[1957]: Listen normally on 10 vxlan.calico [fe80::64d0:1ff:fed4:5ea6%5]:123 Apr 17 23:37:18.352569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810003798.mount: Deactivated successfully. 
Apr 17 23:37:18.370431 containerd[2002]: time="2026-04-17T23:37:18.370377477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:18.371783 containerd[2002]: time="2026-04-17T23:37:18.371619516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:37:18.373169 containerd[2002]: time="2026-04-17T23:37:18.372790677Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:18.375996 containerd[2002]: time="2026-04-17T23:37:18.375959767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:18.378310 containerd[2002]: time="2026-04-17T23:37:18.377502304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.997340366s" Apr 17 23:37:18.378310 containerd[2002]: time="2026-04-17T23:37:18.377544679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:37:18.395169 containerd[2002]: time="2026-04-17T23:37:18.395109840Z" level=info msg="CreateContainer within sandbox \"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:37:18.411560 
containerd[2002]: time="2026-04-17T23:37:18.411513010Z" level=info msg="CreateContainer within sandbox \"901ffd6d92375ee34fcb20e74ee7b99b107c38b0a51d5c571f1bf269e90c1401\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"00dc2f5b23fa6af29a812b129afa90497d1003805f5f9c6032b8580194b3cd8b\"" Apr 17 23:37:18.414442 containerd[2002]: time="2026-04-17T23:37:18.413023481Z" level=info msg="StartContainer for \"00dc2f5b23fa6af29a812b129afa90497d1003805f5f9c6032b8580194b3cd8b\"" Apr 17 23:37:18.459202 systemd[1]: Started cri-containerd-00dc2f5b23fa6af29a812b129afa90497d1003805f5f9c6032b8580194b3cd8b.scope - libcontainer container 00dc2f5b23fa6af29a812b129afa90497d1003805f5f9c6032b8580194b3cd8b. Apr 17 23:37:18.522717 containerd[2002]: time="2026-04-17T23:37:18.522650579Z" level=info msg="StartContainer for \"00dc2f5b23fa6af29a812b129afa90497d1003805f5f9c6032b8580194b3cd8b\" returns successfully" Apr 17 23:37:19.182261 systemd[1]: Started sshd@7-172.31.30.7:22-20.229.252.112:41046.service - OpenSSH per-connection server daemon (20.229.252.112:41046). Apr 17 23:37:20.208898 sshd[5051]: Accepted publickey for core from 20.229.252.112 port 41046 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:20.213377 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:20.219340 systemd-logind[1963]: New session 8 of user core. Apr 17 23:37:20.227367 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 23:37:20.884679 containerd[2002]: time="2026-04-17T23:37:20.884485966Z" level=info msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.009 [WARNING][5089] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.009 [INFO][5089] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.009 [INFO][5089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" iface="eth0" netns="" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.009 [INFO][5089] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.009 [INFO][5089] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.181 [INFO][5096] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.182 [INFO][5096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.182 [INFO][5096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.191 [WARNING][5096] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.191 [INFO][5096] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.194 [INFO][5096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:21.205775 containerd[2002]: 2026-04-17 23:37:21.199 [INFO][5089] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.205775 containerd[2002]: time="2026-04-17T23:37:21.205752844Z" level=info msg="TearDown network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" successfully" Apr 17 23:37:21.206445 containerd[2002]: time="2026-04-17T23:37:21.205784209Z" level=info msg="StopPodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" returns successfully" Apr 17 23:37:21.234234 containerd[2002]: time="2026-04-17T23:37:21.233051905Z" level=info msg="RemovePodSandbox for \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" Apr 17 23:37:21.234234 containerd[2002]: time="2026-04-17T23:37:21.233123521Z" level=info msg="Forcibly stopping sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\"" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.307 [WARNING][5110] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" WorkloadEndpoint="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.307 [INFO][5110] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.307 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" iface="eth0" netns="" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.307 [INFO][5110] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.307 [INFO][5110] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.342 [INFO][5117] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.343 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.344 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.354 [WARNING][5117] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.354 [INFO][5117] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" HandleID="k8s-pod-network.62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Workload="ip--172--31--30--7-k8s-whisker--b9bff547b--ptj7q-eth0" Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.356 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:21.362723 containerd[2002]: 2026-04-17 23:37:21.359 [INFO][5110] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719" Apr 17 23:37:21.362723 containerd[2002]: time="2026-04-17T23:37:21.361471069Z" level=info msg="TearDown network for sandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" successfully" Apr 17 23:37:21.372754 containerd[2002]: time="2026-04-17T23:37:21.372439561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:37:21.372754 containerd[2002]: time="2026-04-17T23:37:21.372537776Z" level=info msg="RemovePodSandbox \"62913abf22df07aad4bef3297f781317c3b9ac48fb5f81d03ea22c5eac14b719\" returns successfully" Apr 17 23:37:21.665232 sshd[5051]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:21.670108 systemd[1]: sshd@7-172.31.30.7:22-20.229.252.112:41046.service: Deactivated successfully. Apr 17 23:37:21.670365 systemd-logind[1963]: Session 8 logged out. 
Waiting for processes to exit. Apr 17 23:37:21.673843 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:37:21.676570 systemd-logind[1963]: Removed session 8. Apr 17 23:37:21.877086 containerd[2002]: time="2026-04-17T23:37:21.876980329Z" level=info msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" Apr 17 23:37:21.877086 containerd[2002]: time="2026-04-17T23:37:21.877075816Z" level=info msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\"" Apr 17 23:37:21.971695 kubelet[3202]: I0417 23:37:21.968929 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58cf4cf77c-6mg8m" podStartSLOduration=6.003252019 podStartE2EDuration="9.956908569s" podCreationTimestamp="2026-04-17 23:37:12 +0000 UTC" firstStartedPulling="2026-04-17 23:37:14.436680385 +0000 UTC m=+53.744146225" lastFinishedPulling="2026-04-17 23:37:18.390336934 +0000 UTC m=+57.697802775" observedRunningTime="2026-04-17 23:37:19.408775236 +0000 UTC m=+58.716241100" watchObservedRunningTime="2026-04-17 23:37:21.956908569 +0000 UTC m=+61.264374425" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.961 [INFO][5140] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.961 [INFO][5140] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" iface="eth0" netns="/var/run/netns/cni-434eab3f-e532-ad7c-f391-156afe84dd43" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.963 [INFO][5140] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" iface="eth0" netns="/var/run/netns/cni-434eab3f-e532-ad7c-f391-156afe84dd43" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.963 [INFO][5140] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" iface="eth0" netns="/var/run/netns/cni-434eab3f-e532-ad7c-f391-156afe84dd43" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.963 [INFO][5140] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:21.963 [INFO][5140] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.006 [INFO][5160] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.007 [INFO][5160] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.007 [INFO][5160] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.017 [WARNING][5160] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.017 [INFO][5160] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.020 [INFO][5160] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:22.026368 containerd[2002]: 2026-04-17 23:37:22.023 [INFO][5140] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Apr 17 23:37:22.028298 containerd[2002]: time="2026-04-17T23:37:22.026591530Z" level=info msg="TearDown network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" successfully" Apr 17 23:37:22.028298 containerd[2002]: time="2026-04-17T23:37:22.026627494Z" level=info msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" returns successfully" Apr 17 23:37:22.035574 systemd[1]: run-netns-cni\x2d434eab3f\x2de532\x2dad7c\x2df391\x2d156afe84dd43.mount: Deactivated successfully. 
Apr 17 23:37:22.037902 containerd[2002]: time="2026-04-17T23:37:22.037220676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744ddc4d96-zx6kx,Uid:b369e486-7b42-48cf-8775-02be039bd5a7,Namespace:calico-system,Attempt:1,}" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.955 [INFO][5149] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.956 [INFO][5149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" iface="eth0" netns="/var/run/netns/cni-ba533b3d-92e8-809c-4e70-152f80029e7d" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.957 [INFO][5149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" iface="eth0" netns="/var/run/netns/cni-ba533b3d-92e8-809c-4e70-152f80029e7d" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.962 [INFO][5149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" iface="eth0" netns="/var/run/netns/cni-ba533b3d-92e8-809c-4e70-152f80029e7d" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.962 [INFO][5149] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:21.962 [INFO][5149] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.016 [INFO][5161] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.017 [INFO][5161] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.020 [INFO][5161] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.038 [WARNING][5161] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.038 [INFO][5161] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.041 [INFO][5161] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:22.046427 containerd[2002]: 2026-04-17 23:37:22.043 [INFO][5149] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:37:22.050358 containerd[2002]: time="2026-04-17T23:37:22.046606039Z" level=info msg="TearDown network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" successfully" Apr 17 23:37:22.050358 containerd[2002]: time="2026-04-17T23:37:22.046638764Z" level=info msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" returns successfully" Apr 17 23:37:22.050358 containerd[2002]: time="2026-04-17T23:37:22.050034880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hdsmf,Uid:5afec07c-296f-444d-884a-ca8b664e1c97,Namespace:kube-system,Attempt:1,}" Apr 17 23:37:22.055181 systemd[1]: run-netns-cni\x2dba533b3d\x2d92e8\x2d809c\x2d4e70\x2d152f80029e7d.mount: Deactivated successfully. Apr 17 23:37:22.342386 systemd-networkd[1894]: caliba41b95f357: Link UP Apr 17 23:37:22.344796 (udev-worker)[5210]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:37:22.345434 systemd-networkd[1894]: caliba41b95f357: Gained carrier Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.138 [INFO][5172] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0 calico-kube-controllers-744ddc4d96- calico-system b369e486-7b42-48cf-8775-02be039bd5a7 1010 0 2026-04-17 23:36:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:744ddc4d96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-7 calico-kube-controllers-744ddc4d96-zx6kx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliba41b95f357 [] [] }} ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.138 [INFO][5172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.240 [INFO][5196] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" HandleID="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.250 
[INFO][5196] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" HandleID="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000301eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"calico-kube-controllers-744ddc4d96-zx6kx", "timestamp":"2026-04-17 23:37:22.24049686 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112000)} Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.250 [INFO][5196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.250 [INFO][5196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.250 [INFO][5196] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.256 [INFO][5196] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.262 [INFO][5196] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.269 [INFO][5196] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.273 [INFO][5196] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.276 [INFO][5196] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.276 [INFO][5196] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.279 [INFO][5196] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.325 [INFO][5196] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.333 [INFO][5196] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.194/26] block=192.168.14.192/26 
handle="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.334 [INFO][5196] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.194/26] handle="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" host="ip-172-31-30-7" Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.334 [INFO][5196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:22.387795 containerd[2002]: 2026-04-17 23:37:22.334 [INFO][5196] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.194/26] IPv6=[] ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" HandleID="k8s-pod-network.081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.337 [INFO][5172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0", GenerateName:"calico-kube-controllers-744ddc4d96-", Namespace:"calico-system", SelfLink:"", UID:"b369e486-7b42-48cf-8775-02be039bd5a7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744ddc4d96", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"calico-kube-controllers-744ddc4d96-zx6kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba41b95f357", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.337 [INFO][5172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.194/32] ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.337 [INFO][5172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba41b95f357 ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.351 [INFO][5172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" 
WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.355 [INFO][5172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0", GenerateName:"calico-kube-controllers-744ddc4d96-", Namespace:"calico-system", SelfLink:"", UID:"b369e486-7b42-48cf-8775-02be039bd5a7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744ddc4d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b", Pod:"calico-kube-controllers-744ddc4d96-zx6kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba41b95f357", MAC:"22:7e:d2:51:0e:d1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:22.390686 containerd[2002]: 2026-04-17 23:37:22.381 [INFO][5172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b" Namespace="calico-system" Pod="calico-kube-controllers-744ddc4d96-zx6kx" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0" Apr 17 23:37:22.467875 containerd[2002]: time="2026-04-17T23:37:22.467499316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:22.467875 containerd[2002]: time="2026-04-17T23:37:22.467585837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:22.467875 containerd[2002]: time="2026-04-17T23:37:22.467625524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:22.467875 containerd[2002]: time="2026-04-17T23:37:22.467761114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:22.495841 (udev-worker)[5212]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:37:22.503000 systemd-networkd[1894]: cali7302cfc6fe9: Link UP Apr 17 23:37:22.507262 systemd-networkd[1894]: cali7302cfc6fe9: Gained carrier Apr 17 23:37:22.525100 systemd[1]: Started cri-containerd-081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b.scope - libcontainer container 081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b. 
Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.169 [INFO][5182] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0 coredns-66bc5c9577- kube-system 5afec07c-296f-444d-884a-ca8b664e1c97 1009 0 2026-04-17 23:36:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-7 coredns-66bc5c9577-hdsmf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7302cfc6fe9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.170 [INFO][5182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.249 [INFO][5201] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" HandleID="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.257 [INFO][5201] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" HandleID="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" 
Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-7", "pod":"coredns-66bc5c9577-hdsmf", "timestamp":"2026-04-17 23:37:22.249128716 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bac60)} Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.257 [INFO][5201] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.334 [INFO][5201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.334 [INFO][5201] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.364 [INFO][5201] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.387 [INFO][5201] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.408 [INFO][5201] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.416 [INFO][5201] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.428 [INFO][5201] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.428 
[INFO][5201] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.439 [INFO][5201] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.452 [INFO][5201] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.471 [INFO][5201] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.195/26] block=192.168.14.192/26 handle="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.472 [INFO][5201] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.195/26] handle="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" host="ip-172-31-30-7" Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.472 [INFO][5201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:37:22.543618 containerd[2002]: 2026-04-17 23:37:22.472 [INFO][5201] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.195/26] IPv6=[] ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" HandleID="k8s-pod-network.a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.492 [INFO][5182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5afec07c-296f-444d-884a-ca8b664e1c97", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"coredns-66bc5c9577-hdsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7302cfc6fe9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.492 [INFO][5182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.195/32] ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.492 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7302cfc6fe9 ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.514 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.515 [INFO][5182] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5afec07c-296f-444d-884a-ca8b664e1c97", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff", Pod:"coredns-66bc5c9577-hdsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7302cfc6fe9", MAC:"be:a7:a9:62:0c:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:22.545328 containerd[2002]: 2026-04-17 23:37:22.540 [INFO][5182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff" Namespace="kube-system" Pod="coredns-66bc5c9577-hdsmf" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:37:22.581906 containerd[2002]: time="2026-04-17T23:37:22.581254952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:22.581906 containerd[2002]: time="2026-04-17T23:37:22.581502214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:22.581906 containerd[2002]: time="2026-04-17T23:37:22.581530760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:22.581906 containerd[2002]: time="2026-04-17T23:37:22.581760493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:22.609131 systemd[1]: Started cri-containerd-a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff.scope - libcontainer container a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff. 
Apr 17 23:37:22.723770 containerd[2002]: time="2026-04-17T23:37:22.723701308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hdsmf,Uid:5afec07c-296f-444d-884a-ca8b664e1c97,Namespace:kube-system,Attempt:1,} returns sandbox id \"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff\"" Apr 17 23:37:22.734743 containerd[2002]: time="2026-04-17T23:37:22.733143630Z" level=info msg="CreateContainer within sandbox \"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:37:22.737182 containerd[2002]: time="2026-04-17T23:37:22.737061426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744ddc4d96-zx6kx,Uid:b369e486-7b42-48cf-8775-02be039bd5a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b\"" Apr 17 23:37:22.741237 containerd[2002]: time="2026-04-17T23:37:22.741175900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:37:22.782574 containerd[2002]: time="2026-04-17T23:37:22.782511768Z" level=info msg="CreateContainer within sandbox \"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a8fe403d9d31cb145dbec42d7ba2ac2f35b61f93da521f76c29e39444eebf21\"" Apr 17 23:37:22.784236 containerd[2002]: time="2026-04-17T23:37:22.783793637Z" level=info msg="StartContainer for \"6a8fe403d9d31cb145dbec42d7ba2ac2f35b61f93da521f76c29e39444eebf21\"" Apr 17 23:37:22.817148 systemd[1]: Started cri-containerd-6a8fe403d9d31cb145dbec42d7ba2ac2f35b61f93da521f76c29e39444eebf21.scope - libcontainer container 6a8fe403d9d31cb145dbec42d7ba2ac2f35b61f93da521f76c29e39444eebf21. 
Apr 17 23:37:22.868910 containerd[2002]: time="2026-04-17T23:37:22.868446657Z" level=info msg="StartContainer for \"6a8fe403d9d31cb145dbec42d7ba2ac2f35b61f93da521f76c29e39444eebf21\" returns successfully" Apr 17 23:37:22.880746 containerd[2002]: time="2026-04-17T23:37:22.879278656Z" level=info msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" Apr 17 23:37:22.880746 containerd[2002]: time="2026-04-17T23:37:22.879967850Z" level=info msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\"" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.975 [INFO][5373] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.976 [INFO][5373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" iface="eth0" netns="/var/run/netns/cni-35ae1bcd-eec9-0f3a-be8a-c77696839c39" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.981 [INFO][5373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" iface="eth0" netns="/var/run/netns/cni-35ae1bcd-eec9-0f3a-be8a-c77696839c39" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.982 [INFO][5373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" iface="eth0" netns="/var/run/netns/cni-35ae1bcd-eec9-0f3a-be8a-c77696839c39" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.982 [INFO][5373] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:22.982 [INFO][5373] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.065 [INFO][5392] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.065 [INFO][5392] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.065 [INFO][5392] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.086 [WARNING][5392] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.086 [INFO][5392] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.089 [INFO][5392] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:23.096704 containerd[2002]: 2026-04-17 23:37:23.092 [INFO][5373] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:37:23.099705 containerd[2002]: time="2026-04-17T23:37:23.099353135Z" level=info msg="TearDown network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" successfully" Apr 17 23:37:23.099705 containerd[2002]: time="2026-04-17T23:37:23.099397134Z" level=info msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" returns successfully" Apr 17 23:37:23.106196 systemd[1]: run-netns-cni\x2d35ae1bcd\x2deec9\x2d0f3a\x2dbe8a\x2dc77696839c39.mount: Deactivated successfully. Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.970 [INFO][5372] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.971 [INFO][5372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" iface="eth0" netns="/var/run/netns/cni-41fce067-c5f6-7b87-80d0-9353be0c6ef4" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.971 [INFO][5372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" iface="eth0" netns="/var/run/netns/cni-41fce067-c5f6-7b87-80d0-9353be0c6ef4" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.971 [INFO][5372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" iface="eth0" netns="/var/run/netns/cni-41fce067-c5f6-7b87-80d0-9353be0c6ef4" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.972 [INFO][5372] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:22.972 [INFO][5372] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.068 [INFO][5386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.069 [INFO][5386] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.089 [INFO][5386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.109 [WARNING][5386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.109 [INFO][5386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.112 [INFO][5386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:23.118478 containerd[2002]: 2026-04-17 23:37:23.115 [INFO][5372] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Apr 17 23:37:23.124333 containerd[2002]: time="2026-04-17T23:37:23.119345452Z" level=info msg="TearDown network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" successfully" Apr 17 23:37:23.124333 containerd[2002]: time="2026-04-17T23:37:23.119395888Z" level=info msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" returns successfully" Apr 17 23:37:23.124429 containerd[2002]: time="2026-04-17T23:37:23.124389703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8w592,Uid:e73a167e-3582-40a4-9b34-7572429fc278,Namespace:calico-system,Attempt:1,}" Apr 17 23:37:23.125733 systemd[1]: run-netns-cni\x2d41fce067\x2dc5f6\x2d7b87\x2d80d0\x2d9353be0c6ef4.mount: Deactivated successfully. 
Apr 17 23:37:23.129184 containerd[2002]: time="2026-04-17T23:37:23.129145942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-9b2kl,Uid:ab3820f2-82fb-4fe2-a46c-ca486562fb4d,Namespace:calico-system,Attempt:1,}" Apr 17 23:37:23.431225 systemd-networkd[1894]: calieb77bbcdef7: Link UP Apr 17 23:37:23.432408 systemd-networkd[1894]: calieb77bbcdef7: Gained carrier Apr 17 23:37:23.442676 kubelet[3202]: I0417 23:37:23.438266 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hdsmf" podStartSLOduration=55.438243706 podStartE2EDuration="55.438243706s" podCreationTimestamp="2026-04-17 23:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:23.43659518 +0000 UTC m=+62.744061065" watchObservedRunningTime="2026-04-17 23:37:23.438243706 +0000 UTC m=+62.745709570" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.266 [INFO][5410] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0 goldmane-cccfbd5cf- calico-system e73a167e-3582-40a4-9b34-7572429fc278 1034 0 2026-04-17 23:36:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-7 goldmane-cccfbd5cf-8w592 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calieb77bbcdef7 [] [] }} ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.266 [INFO][5410] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.335 [INFO][5433] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" HandleID="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.346 [INFO][5433] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" HandleID="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"goldmane-cccfbd5cf-8w592", "timestamp":"2026-04-17 23:37:23.335541143 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c1080)} Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.346 [INFO][5433] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.346 [INFO][5433] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.347 [INFO][5433] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.350 [INFO][5433] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.358 [INFO][5433] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.364 [INFO][5433] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.368 [INFO][5433] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.378 [INFO][5433] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.378 [INFO][5433] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.381 [INFO][5433] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740 Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.392 [INFO][5433] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5433] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.196/26] block=192.168.14.192/26 
handle="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5433] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.196/26] handle="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" host="ip-172-31-30-7" Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5433] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:23.463604 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5433] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.196/26] IPv6=[] ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" HandleID="k8s-pod-network.7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.415 [INFO][5410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e73a167e-3582-40a4-9b34-7572429fc278", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"goldmane-cccfbd5cf-8w592", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieb77bbcdef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.415 [INFO][5410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.196/32] ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.417 [INFO][5410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb77bbcdef7 ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.433 [INFO][5410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.435 [INFO][5410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e73a167e-3582-40a4-9b34-7572429fc278", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740", Pod:"goldmane-cccfbd5cf-8w592", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieb77bbcdef7", MAC:"da:f7:8a:19:e8:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:23.467244 containerd[2002]: 2026-04-17 23:37:23.456 [INFO][5410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8w592" WorkloadEndpoint="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:37:23.532956 systemd-networkd[1894]: 
calid9944baf7d9: Link UP Apr 17 23:37:23.533215 systemd-networkd[1894]: calid9944baf7d9: Gained carrier Apr 17 23:37:23.545903 containerd[2002]: time="2026-04-17T23:37:23.543607066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:23.545903 containerd[2002]: time="2026-04-17T23:37:23.543687369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:23.545903 containerd[2002]: time="2026-04-17T23:37:23.543703314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:23.545903 containerd[2002]: time="2026-04-17T23:37:23.543810804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.265 [INFO][5402] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0 calico-apiserver-846d8859d6- calico-system ab3820f2-82fb-4fe2-a46c-ca486562fb4d 1033 0 2026-04-17 23:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:846d8859d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-7 calico-apiserver-846d8859d6-9b2kl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid9944baf7d9 [] [] }} ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 
23:37:23.265 [INFO][5402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.366 [INFO][5428] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" HandleID="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.377 [INFO][5428] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" HandleID="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364400), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"calico-apiserver-846d8859d6-9b2kl", "timestamp":"2026-04-17 23:37:23.366374336 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ce580)} Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.377 [INFO][5428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.407 [INFO][5428] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.457 [INFO][5428] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.467 [INFO][5428] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.476 [INFO][5428] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.481 [INFO][5428] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.487 [INFO][5428] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.488 [INFO][5428] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.496 [INFO][5428] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.505 [INFO][5428] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.524 [INFO][5428] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.197/26] block=192.168.14.192/26 
handle="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.524 [INFO][5428] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.197/26] handle="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" host="ip-172-31-30-7" Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.524 [INFO][5428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:23.577494 containerd[2002]: 2026-04-17 23:37:23.524 [INFO][5428] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.197/26] IPv6=[] ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" HandleID="k8s-pod-network.86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.527 [INFO][5402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"ab3820f2-82fb-4fe2-a46c-ca486562fb4d", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"calico-apiserver-846d8859d6-9b2kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9944baf7d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.527 [INFO][5402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.197/32] ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.527 [INFO][5402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9944baf7d9 ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.532 [INFO][5402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.533 [INFO][5402] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"ab3820f2-82fb-4fe2-a46c-ca486562fb4d", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a", Pod:"calico-apiserver-846d8859d6-9b2kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9944baf7d9", MAC:"12:ca:bd:82:c4:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:23.579838 containerd[2002]: 2026-04-17 23:37:23.564 [INFO][5402] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-9b2kl" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0" Apr 17 23:37:23.585118 systemd[1]: Started cri-containerd-7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740.scope - libcontainer container 7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740. Apr 17 23:37:23.628514 containerd[2002]: time="2026-04-17T23:37:23.628383497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:23.628793 containerd[2002]: time="2026-04-17T23:37:23.628565094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:23.628793 containerd[2002]: time="2026-04-17T23:37:23.628599610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:23.629039 containerd[2002]: time="2026-04-17T23:37:23.628967050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:23.675153 systemd[1]: Started cri-containerd-86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a.scope - libcontainer container 86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a. 
Apr 17 23:37:23.752117 systemd-networkd[1894]: caliba41b95f357: Gained IPv6LL Apr 17 23:37:23.791292 containerd[2002]: time="2026-04-17T23:37:23.791202762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8w592,Uid:e73a167e-3582-40a4-9b34-7572429fc278,Namespace:calico-system,Attempt:1,} returns sandbox id \"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740\"" Apr 17 23:37:23.840369 containerd[2002]: time="2026-04-17T23:37:23.840205849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-9b2kl,Uid:ab3820f2-82fb-4fe2-a46c-ca486562fb4d,Namespace:calico-system,Attempt:1,} returns sandbox id \"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a\"" Apr 17 23:37:23.878158 containerd[2002]: time="2026-04-17T23:37:23.877927089Z" level=info msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" Apr 17 23:37:23.878536 containerd[2002]: time="2026-04-17T23:37:23.878498174Z" level=info msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\"" Apr 17 23:37:24.008190 systemd-networkd[1894]: cali7302cfc6fe9: Gained IPv6LL Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.961 [INFO][5569] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.962 [INFO][5569] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" iface="eth0" netns="/var/run/netns/cni-3ea0abcf-9bca-83a1-30b8-f8bda2f298e1" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.963 [INFO][5569] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" iface="eth0" netns="/var/run/netns/cni-3ea0abcf-9bca-83a1-30b8-f8bda2f298e1" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.971 [INFO][5569] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" iface="eth0" netns="/var/run/netns/cni-3ea0abcf-9bca-83a1-30b8-f8bda2f298e1" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.971 [INFO][5569] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:23.971 [INFO][5569] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.105 [INFO][5586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.105 [INFO][5586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.105 [INFO][5586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.116 [WARNING][5586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.116 [INFO][5586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.120 [INFO][5586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:24.132376 containerd[2002]: 2026-04-17 23:37:24.127 [INFO][5569] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Apr 17 23:37:24.133586 containerd[2002]: time="2026-04-17T23:37:24.132761713Z" level=info msg="TearDown network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" successfully" Apr 17 23:37:24.133586 containerd[2002]: time="2026-04-17T23:37:24.132798893Z" level=info msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" returns successfully" Apr 17 23:37:24.140539 containerd[2002]: time="2026-04-17T23:37:24.140310548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-lh2jg,Uid:1537c3df-d617-414a-93ca-eeed9a0ad8c4,Namespace:calico-system,Attempt:1,}" Apr 17 23:37:24.140806 systemd[1]: run-netns-cni\x2d3ea0abcf\x2d9bca\x2d83a1\x2d30b8\x2df8bda2f298e1.mount: Deactivated successfully. 
Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.013 [INFO][5577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.013 [INFO][5577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" iface="eth0" netns="/var/run/netns/cni-9df6ef13-6c3c-99c2-7e04-f6bf36e1876a" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.014 [INFO][5577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" iface="eth0" netns="/var/run/netns/cni-9df6ef13-6c3c-99c2-7e04-f6bf36e1876a" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.014 [INFO][5577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" iface="eth0" netns="/var/run/netns/cni-9df6ef13-6c3c-99c2-7e04-f6bf36e1876a" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.014 [INFO][5577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.014 [INFO][5577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.189 [INFO][5592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.189 [INFO][5592] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.192 [INFO][5592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.221 [WARNING][5592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.222 [INFO][5592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.226 [INFO][5592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:24.235257 containerd[2002]: 2026-04-17 23:37:24.232 [INFO][5577] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:37:24.240587 containerd[2002]: time="2026-04-17T23:37:24.235414023Z" level=info msg="TearDown network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" successfully" Apr 17 23:37:24.240587 containerd[2002]: time="2026-04-17T23:37:24.235446606Z" level=info msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" returns successfully" Apr 17 23:37:24.240587 containerd[2002]: time="2026-04-17T23:37:24.238330052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttlp,Uid:cdd62a40-a858-425f-a3e8-4e85787fe5f7,Namespace:calico-system,Attempt:1,}" Apr 17 23:37:24.241527 systemd[1]: run-netns-cni\x2d9df6ef13\x2d6c3c\x2d99c2\x2d7e04\x2df6bf36e1876a.mount: Deactivated successfully. Apr 17 23:37:24.548296 systemd-networkd[1894]: cali306e0539285: Link UP Apr 17 23:37:24.558369 systemd-networkd[1894]: cali306e0539285: Gained carrier Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.306 [INFO][5603] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0 calico-apiserver-846d8859d6- calico-system 1537c3df-d617-414a-93ca-eeed9a0ad8c4 1056 0 2026-04-17 23:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:846d8859d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-7 calico-apiserver-846d8859d6-lh2jg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali306e0539285 [] [] }} ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" 
WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.307 [INFO][5603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.408 [INFO][5627] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" HandleID="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.433 [INFO][5627] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" HandleID="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef510), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"calico-apiserver-846d8859d6-lh2jg", "timestamp":"2026-04-17 23:37:24.408567657 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000410f20)} Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.433 [INFO][5627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.433 [INFO][5627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.434 [INFO][5627] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.439 [INFO][5627] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.452 [INFO][5627] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.472 [INFO][5627] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.476 [INFO][5627] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.487 [INFO][5627] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.487 [INFO][5627] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.491 [INFO][5627] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805 Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.508 [INFO][5627] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 
2026-04-17 23:37:24.533 [INFO][5627] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.198/26] block=192.168.14.192/26 handle="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.533 [INFO][5627] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.198/26] handle="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" host="ip-172-31-30-7" Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.533 [INFO][5627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:24.638907 containerd[2002]: 2026-04-17 23:37:24.533 [INFO][5627] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.198/26] IPv6=[] ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" HandleID="k8s-pod-network.cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.541 [INFO][5603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"1537c3df-d617-414a-93ca-eeed9a0ad8c4", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"calico-apiserver-846d8859d6-lh2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali306e0539285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.541 [INFO][5603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.198/32] ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.541 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali306e0539285 ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.562 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" 
WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.588 [INFO][5603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"1537c3df-d617-414a-93ca-eeed9a0ad8c4", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805", Pod:"calico-apiserver-846d8859d6-lh2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali306e0539285", MAC:"52:0b:48:17:b1:fc", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:24.640305 containerd[2002]: 2026-04-17 23:37:24.626 [INFO][5603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805" Namespace="calico-system" Pod="calico-apiserver-846d8859d6-lh2jg" WorkloadEndpoint="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0" Apr 17 23:37:24.673590 systemd-networkd[1894]: cali52a6b536bed: Link UP Apr 17 23:37:24.673916 systemd-networkd[1894]: cali52a6b536bed: Gained carrier Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.379 [INFO][5619] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0 csi-node-driver- calico-system cdd62a40-a858-425f-a3e8-4e85787fe5f7 1057 0 2026-04-17 23:36:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-7 csi-node-driver-tttlp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali52a6b536bed [] [] }} ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.380 [INFO][5619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.469 [INFO][5634] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" HandleID="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.490 [INFO][5634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" HandleID="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fa2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-7", "pod":"csi-node-driver-tttlp", "timestamp":"2026-04-17 23:37:24.469407697 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e14a0)} Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.490 [INFO][5634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.534 [INFO][5634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.534 [INFO][5634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.545 [INFO][5634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.561 [INFO][5634] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.585 [INFO][5634] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.593 [INFO][5634] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.603 [INFO][5634] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.603 [INFO][5634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.611 [INFO][5634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5 Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.633 [INFO][5634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.663 [INFO][5634] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.199/26] block=192.168.14.192/26 
handle="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.664 [INFO][5634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.199/26] handle="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" host="ip-172-31-30-7" Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.664 [INFO][5634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:24.754523 containerd[2002]: 2026-04-17 23:37:24.664 [INFO][5634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.199/26] IPv6=[] ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" HandleID="k8s-pod-network.d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.668 [INFO][5619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd62a40-a858-425f-a3e8-4e85787fe5f7", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"csi-node-driver-tttlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52a6b536bed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.668 [INFO][5619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.199/32] ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.668 [INFO][5619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52a6b536bed ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.676 [INFO][5619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.680 [INFO][5619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd62a40-a858-425f-a3e8-4e85787fe5f7", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5", Pod:"csi-node-driver-tttlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52a6b536bed", MAC:"66:20:76:b7:34:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:24.762434 containerd[2002]: 2026-04-17 23:37:24.722 [INFO][5619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5" 
Namespace="calico-system" Pod="csi-node-driver-tttlp" WorkloadEndpoint="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:37:24.887364 containerd[2002]: time="2026-04-17T23:37:24.886233640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:24.887364 containerd[2002]: time="2026-04-17T23:37:24.886324574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:24.887364 containerd[2002]: time="2026-04-17T23:37:24.886346645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:24.887364 containerd[2002]: time="2026-04-17T23:37:24.886456544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:24.944916 containerd[2002]: time="2026-04-17T23:37:24.931433314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:24.944916 containerd[2002]: time="2026-04-17T23:37:24.935503598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:24.944916 containerd[2002]: time="2026-04-17T23:37:24.935537424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:24.944916 containerd[2002]: time="2026-04-17T23:37:24.935678175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:25.001174 systemd[1]: Started cri-containerd-cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805.scope - libcontainer container cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805. Apr 17 23:37:25.007938 systemd[1]: Started cri-containerd-d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5.scope - libcontainer container d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5. Apr 17 23:37:25.072927 containerd[2002]: time="2026-04-17T23:37:25.072772420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttlp,Uid:cdd62a40-a858-425f-a3e8-4e85787fe5f7,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5\"" Apr 17 23:37:25.110708 containerd[2002]: time="2026-04-17T23:37:25.110449127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d8859d6-lh2jg,Uid:1537c3df-d617-414a-93ca-eeed9a0ad8c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805\"" Apr 17 23:37:25.353438 systemd-networkd[1894]: calid9944baf7d9: Gained IPv6LL Apr 17 23:37:25.481182 systemd-networkd[1894]: calieb77bbcdef7: Gained IPv6LL Apr 17 23:37:25.879156 containerd[2002]: time="2026-04-17T23:37:25.879087123Z" level=info msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" Apr 17 23:37:26.056245 systemd-networkd[1894]: cali52a6b536bed: Gained IPv6LL Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.027 [INFO][5776] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.027 [INFO][5776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" iface="eth0" netns="/var/run/netns/cni-1617e260-51cd-309e-8632-2d3e66b9425c" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.027 [INFO][5776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" iface="eth0" netns="/var/run/netns/cni-1617e260-51cd-309e-8632-2d3e66b9425c" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.028 [INFO][5776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" iface="eth0" netns="/var/run/netns/cni-1617e260-51cd-309e-8632-2d3e66b9425c" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.028 [INFO][5776] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.028 [INFO][5776] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.136 [INFO][5784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.136 [INFO][5784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.136 [INFO][5784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.150 [WARNING][5784] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.150 [INFO][5784] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.153 [INFO][5784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:26.160895 containerd[2002]: 2026-04-17 23:37:26.156 [INFO][5776] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:37:26.160895 containerd[2002]: time="2026-04-17T23:37:26.160535711Z" level=info msg="TearDown network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" successfully" Apr 17 23:37:26.165020 containerd[2002]: time="2026-04-17T23:37:26.163126683Z" level=info msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" returns successfully" Apr 17 23:37:26.167980 systemd[1]: run-netns-cni\x2d1617e260\x2d51cd\x2d309e\x2d8632\x2d2d3e66b9425c.mount: Deactivated successfully. 
Apr 17 23:37:26.169808 containerd[2002]: time="2026-04-17T23:37:26.169066644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cq57f,Uid:f6754cc5-f109-476a-ab6e-ba6495a198d8,Namespace:kube-system,Attempt:1,}" Apr 17 23:37:26.377995 systemd-networkd[1894]: cali306e0539285: Gained IPv6LL Apr 17 23:37:26.484140 systemd-networkd[1894]: cali6ba9b4bdd9a: Link UP Apr 17 23:37:26.484466 systemd-networkd[1894]: cali6ba9b4bdd9a: Gained carrier Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.278 [INFO][5794] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0 coredns-66bc5c9577- kube-system f6754cc5-f109-476a-ab6e-ba6495a198d8 1082 0 2026-04-17 23:36:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-7 coredns-66bc5c9577-cq57f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ba9b4bdd9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.278 [INFO][5794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.366 [INFO][5807] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" HandleID="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.383 [INFO][5807] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" HandleID="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-7", "pod":"coredns-66bc5c9577-cq57f", "timestamp":"2026-04-17 23:37:26.366644355 +0000 UTC"}, Hostname:"ip-172-31-30-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000340580)} Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.383 [INFO][5807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.383 [INFO][5807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.383 [INFO][5807] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-7' Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.388 [INFO][5807] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.400 [INFO][5807] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.414 [INFO][5807] ipam/ipam.go 526: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.428 [INFO][5807] ipam/ipam.go 160: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.436 [INFO][5807] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.437 [INFO][5807] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.442 [INFO][5807] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8 Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.450 [INFO][5807] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.464 [INFO][5807] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.14.200/26] block=192.168.14.192/26 
handle="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.464 [INFO][5807] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.14.200/26] handle="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" host="ip-172-31-30-7" Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.464 [INFO][5807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:26.527324 containerd[2002]: 2026-04-17 23:37:26.466 [INFO][5807] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.14.200/26] IPv6=[] ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" HandleID="k8s-pod-network.a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.473 [INFO][5794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6754cc5-f109-476a-ab6e-ba6495a198d8", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"", Pod:"coredns-66bc5c9577-cq57f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba9b4bdd9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.473 [INFO][5794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.200/32] ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.473 [INFO][5794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ba9b4bdd9a ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" 
WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.484 [INFO][5794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.485 [INFO][5794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6754cc5-f109-476a-ab6e-ba6495a198d8", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8", Pod:"coredns-66bc5c9577-cq57f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba9b4bdd9a", MAC:"7e:a2:2f:dc:c6:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:26.528459 containerd[2002]: 2026-04-17 23:37:26.517 [INFO][5794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8" Namespace="kube-system" Pod="coredns-66bc5c9577-cq57f" WorkloadEndpoint="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:37:26.721335 containerd[2002]: time="2026-04-17T23:37:26.720664561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:26.721335 containerd[2002]: time="2026-04-17T23:37:26.720757037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:26.721335 containerd[2002]: time="2026-04-17T23:37:26.720775595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:26.722308 containerd[2002]: time="2026-04-17T23:37:26.721771420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:26.843110 systemd[1]: Started cri-containerd-a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8.scope - libcontainer container a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8. Apr 17 23:37:26.846310 systemd[1]: Started sshd@8-172.31.30.7:22-20.229.252.112:52898.service - OpenSSH per-connection server daemon (20.229.252.112:52898). Apr 17 23:37:26.979529 containerd[2002]: time="2026-04-17T23:37:26.979492700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cq57f,Uid:f6754cc5-f109-476a-ab6e-ba6495a198d8,Namespace:kube-system,Attempt:1,} returns sandbox id \"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8\"" Apr 17 23:37:27.007487 containerd[2002]: time="2026-04-17T23:37:27.007348761Z" level=info msg="CreateContainer within sandbox \"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:37:27.035119 containerd[2002]: time="2026-04-17T23:37:27.034806888Z" level=info msg="CreateContainer within sandbox \"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1feb91ed109a933cb7800bb0de1fb3512f51b3d00e8a3e272cfa61e64e29fc1d\"" Apr 17 23:37:27.036656 containerd[2002]: time="2026-04-17T23:37:27.036533155Z" level=info msg="StartContainer for \"1feb91ed109a933cb7800bb0de1fb3512f51b3d00e8a3e272cfa61e64e29fc1d\"" Apr 17 23:37:27.085175 systemd[1]: Started cri-containerd-1feb91ed109a933cb7800bb0de1fb3512f51b3d00e8a3e272cfa61e64e29fc1d.scope - libcontainer container 1feb91ed109a933cb7800bb0de1fb3512f51b3d00e8a3e272cfa61e64e29fc1d. 
Apr 17 23:37:27.149614 containerd[2002]: time="2026-04-17T23:37:27.148377263Z" level=info msg="StartContainer for \"1feb91ed109a933cb7800bb0de1fb3512f51b3d00e8a3e272cfa61e64e29fc1d\" returns successfully" Apr 17 23:37:27.536179 kubelet[3202]: I0417 23:37:27.536035 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cq57f" podStartSLOduration=59.536007697 podStartE2EDuration="59.536007697s" podCreationTimestamp="2026-04-17 23:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:27.527085224 +0000 UTC m=+66.834551089" watchObservedRunningTime="2026-04-17 23:37:27.536007697 +0000 UTC m=+66.843473556" Apr 17 23:37:27.592905 systemd-networkd[1894]: cali6ba9b4bdd9a: Gained IPv6LL Apr 17 23:37:27.835489 containerd[2002]: time="2026-04-17T23:37:27.835412067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:37:27.839991 containerd[2002]: time="2026-04-17T23:37:27.839724718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.098501238s" Apr 17 23:37:27.839991 containerd[2002]: time="2026-04-17T23:37:27.839781025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:37:27.840187 containerd[2002]: time="2026-04-17T23:37:27.839842426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 17 23:37:27.841464 containerd[2002]: time="2026-04-17T23:37:27.841427015Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:27.842980 containerd[2002]: time="2026-04-17T23:37:27.842567824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:27.844257 containerd[2002]: time="2026-04-17T23:37:27.843973692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:37:27.901166 containerd[2002]: time="2026-04-17T23:37:27.901068036Z" level=info msg="CreateContainer within sandbox \"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:37:27.924438 containerd[2002]: time="2026-04-17T23:37:27.924296337Z" level=info msg="CreateContainer within sandbox \"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa\"" Apr 17 23:37:27.927705 containerd[2002]: time="2026-04-17T23:37:27.925198227Z" level=info msg="StartContainer for \"3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa\"" Apr 17 23:37:27.938404 sshd[5865]: Accepted publickey for core from 20.229.252.112 port 52898 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:27.945416 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:27.960945 systemd-logind[1963]: New session 9 of user core. Apr 17 23:37:27.964436 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 17 23:37:27.982166 systemd[1]: Started cri-containerd-3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa.scope - libcontainer container 3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa. Apr 17 23:37:28.040579 containerd[2002]: time="2026-04-17T23:37:28.040239089Z" level=info msg="StartContainer for \"3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa\" returns successfully" Apr 17 23:37:28.843539 kubelet[3202]: I0417 23:37:28.843456 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-744ddc4d96-zx6kx" podStartSLOduration=41.74195529 podStartE2EDuration="46.843428426s" podCreationTimestamp="2026-04-17 23:36:42 +0000 UTC" firstStartedPulling="2026-04-17 23:37:22.740578481 +0000 UTC m=+62.048044320" lastFinishedPulling="2026-04-17 23:37:27.842051597 +0000 UTC m=+67.149517456" observedRunningTime="2026-04-17 23:37:28.576796747 +0000 UTC m=+67.884262609" watchObservedRunningTime="2026-04-17 23:37:28.843428426 +0000 UTC m=+68.150894289" Apr 17 23:37:29.374418 sshd[5865]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:29.383482 systemd-logind[1963]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:37:29.384461 systemd[1]: sshd@8-172.31.30.7:22-20.229.252.112:52898.service: Deactivated successfully. Apr 17 23:37:29.387706 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:37:29.389770 systemd-logind[1963]: Removed session 9. 
Apr 17 23:37:29.808360 ntpd[1957]: Listen normally on 11 caliba41b95f357 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:37:29.808464 ntpd[1957]: Listen normally on 12 cali7302cfc6fe9 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 11 caliba41b95f357 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 12 cali7302cfc6fe9 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 13 calieb77bbcdef7 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 14 calid9944baf7d9 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 15 cali306e0539285 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 16 cali52a6b536bed [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:37:29.809171 ntpd[1957]: 17 Apr 23:37:29 ntpd[1957]: Listen normally on 17 cali6ba9b4bdd9a [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:37:29.808514 ntpd[1957]: Listen normally on 13 calieb77bbcdef7 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:37:29.808561 ntpd[1957]: Listen normally on 14 calid9944baf7d9 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:37:29.808601 ntpd[1957]: Listen normally on 15 cali306e0539285 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:37:29.808641 ntpd[1957]: Listen normally on 16 cali52a6b536bed [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:37:29.808690 ntpd[1957]: Listen normally on 17 cali6ba9b4bdd9a [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:37:32.503815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973045204.mount: Deactivated successfully. 
Apr 17 23:37:33.371065 containerd[2002]: time="2026-04-17T23:37:33.370995631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:33.373284 containerd[2002]: time="2026-04-17T23:37:33.373023765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:37:33.377463 containerd[2002]: time="2026-04-17T23:37:33.375895774Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:33.380271 containerd[2002]: time="2026-04-17T23:37:33.380225434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:33.381428 containerd[2002]: time="2026-04-17T23:37:33.381378562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.537360387s" Apr 17 23:37:33.381559 containerd[2002]: time="2026-04-17T23:37:33.381448626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:37:33.424622 containerd[2002]: time="2026-04-17T23:37:33.424573738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:37:33.470260 containerd[2002]: time="2026-04-17T23:37:33.470131823Z" level=info msg="CreateContainer within sandbox 
\"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:37:33.501057 containerd[2002]: time="2026-04-17T23:37:33.500113357Z" level=info msg="CreateContainer within sandbox \"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677\"" Apr 17 23:37:33.501332 containerd[2002]: time="2026-04-17T23:37:33.501300846Z" level=info msg="StartContainer for \"2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677\"" Apr 17 23:37:33.575266 systemd[1]: Started cri-containerd-2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677.scope - libcontainer container 2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677. Apr 17 23:37:33.665067 containerd[2002]: time="2026-04-17T23:37:33.664036985Z" level=info msg="StartContainer for \"2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677\" returns successfully" Apr 17 23:37:34.174253 kubelet[3202]: I0417 23:37:34.174050 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-8w592" podStartSLOduration=43.519836602 podStartE2EDuration="53.14952077s" podCreationTimestamp="2026-04-17 23:36:41 +0000 UTC" firstStartedPulling="2026-04-17 23:37:23.79446053 +0000 UTC m=+63.101926383" lastFinishedPulling="2026-04-17 23:37:33.424144695 +0000 UTC m=+72.731610551" observedRunningTime="2026-04-17 23:37:34.140326368 +0000 UTC m=+73.447792234" watchObservedRunningTime="2026-04-17 23:37:34.14952077 +0000 UTC m=+73.456986632" Apr 17 23:37:34.565717 systemd[1]: Started sshd@9-172.31.30.7:22-20.229.252.112:52900.service - OpenSSH per-connection server daemon (20.229.252.112:52900). 
Apr 17 23:37:35.185713 systemd[1]: run-containerd-runc-k8s.io-2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677-runc.EB3GVR.mount: Deactivated successfully. Apr 17 23:37:35.701778 sshd[6087]: Accepted publickey for core from 20.229.252.112 port 52900 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:35.707641 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:35.716942 systemd-logind[1963]: New session 10 of user core. Apr 17 23:37:35.721947 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:37:36.184786 systemd[1]: run-containerd-runc-k8s.io-2f313fd93b032e3ba2f5c5448cb98ee4ab2d229dc05953756b5dd537b23d0677-runc.QueNhr.mount: Deactivated successfully. Apr 17 23:37:37.232498 sshd[6087]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:37.239478 systemd[1]: sshd@9-172.31.30.7:22-20.229.252.112:52900.service: Deactivated successfully. Apr 17 23:37:37.245422 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:37:37.247549 systemd-logind[1963]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:37:37.250115 systemd-logind[1963]: Removed session 10. 
Apr 17 23:37:37.327076 containerd[2002]: time="2026-04-17T23:37:37.327006772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:37.328747 containerd[2002]: time="2026-04-17T23:37:37.328574964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:37:37.331584 containerd[2002]: time="2026-04-17T23:37:37.331101797Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:37.335582 containerd[2002]: time="2026-04-17T23:37:37.335502833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:37.336821 containerd[2002]: time="2026-04-17T23:37:37.336770429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.912147387s" Apr 17 23:37:37.336821 containerd[2002]: time="2026-04-17T23:37:37.336820691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:37:37.372277 containerd[2002]: time="2026-04-17T23:37:37.372219279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:37:37.443687 containerd[2002]: time="2026-04-17T23:37:37.443629049Z" level=info msg="CreateContainer within sandbox 
\"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:37:37.471161 containerd[2002]: time="2026-04-17T23:37:37.471107944Z" level=info msg="CreateContainer within sandbox \"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfe5a061c736041822021ab236c1a8e0e911c8e3212710f1c10134fe79a0bc33\"" Apr 17 23:37:37.474550 containerd[2002]: time="2026-04-17T23:37:37.472043769Z" level=info msg="StartContainer for \"cfe5a061c736041822021ab236c1a8e0e911c8e3212710f1c10134fe79a0bc33\"" Apr 17 23:37:37.723220 systemd[1]: Started cri-containerd-cfe5a061c736041822021ab236c1a8e0e911c8e3212710f1c10134fe79a0bc33.scope - libcontainer container cfe5a061c736041822021ab236c1a8e0e911c8e3212710f1c10134fe79a0bc33. Apr 17 23:37:37.858044 containerd[2002]: time="2026-04-17T23:37:37.857979849Z" level=info msg="StartContainer for \"cfe5a061c736041822021ab236c1a8e0e911c8e3212710f1c10134fe79a0bc33\" returns successfully" Apr 17 23:37:38.484191 kubelet[3202]: I0417 23:37:38.465894 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-846d8859d6-9b2kl" podStartSLOduration=44.916384596 podStartE2EDuration="58.446652343s" podCreationTimestamp="2026-04-17 23:36:40 +0000 UTC" firstStartedPulling="2026-04-17 23:37:23.845264514 +0000 UTC m=+63.152730353" lastFinishedPulling="2026-04-17 23:37:37.375532218 +0000 UTC m=+76.682998100" observedRunningTime="2026-04-17 23:37:38.36787629 +0000 UTC m=+77.675342143" watchObservedRunningTime="2026-04-17 23:37:38.446652343 +0000 UTC m=+77.754118206" Apr 17 23:37:39.183279 containerd[2002]: time="2026-04-17T23:37:39.183219853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:39.187749 containerd[2002]: 
time="2026-04-17T23:37:39.186210785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:37:39.188989 containerd[2002]: time="2026-04-17T23:37:39.188948589Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:39.195407 containerd[2002]: time="2026-04-17T23:37:39.195356117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:39.198568 containerd[2002]: time="2026-04-17T23:37:39.197838548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.825569043s" Apr 17 23:37:39.198723 containerd[2002]: time="2026-04-17T23:37:39.198582988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:37:39.200576 containerd[2002]: time="2026-04-17T23:37:39.200362032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:37:39.255439 containerd[2002]: time="2026-04-17T23:37:39.255382705Z" level=info msg="CreateContainer within sandbox \"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:37:39.330545 containerd[2002]: time="2026-04-17T23:37:39.330491703Z" level=info msg="CreateContainer within sandbox \"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"80541ade3029a4b408d519b40b8665451069774173ef72abff3484781a71cf5c\"" Apr 17 23:37:39.333220 containerd[2002]: time="2026-04-17T23:37:39.333176526Z" level=info msg="StartContainer for \"80541ade3029a4b408d519b40b8665451069774173ef72abff3484781a71cf5c\"" Apr 17 23:37:39.591106 systemd[1]: Started cri-containerd-80541ade3029a4b408d519b40b8665451069774173ef72abff3484781a71cf5c.scope - libcontainer container 80541ade3029a4b408d519b40b8665451069774173ef72abff3484781a71cf5c. Apr 17 23:37:39.620773 containerd[2002]: time="2026-04-17T23:37:39.619381890Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:39.627376 containerd[2002]: time="2026-04-17T23:37:39.627312229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:37:39.640309 containerd[2002]: time="2026-04-17T23:37:39.638672656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 438.257126ms" Apr 17 23:37:39.640309 containerd[2002]: time="2026-04-17T23:37:39.639164192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:37:39.665120 containerd[2002]: time="2026-04-17T23:37:39.664589085Z" level=info msg="CreateContainer within sandbox \"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:37:39.748360 containerd[2002]: 
time="2026-04-17T23:37:39.746965701Z" level=info msg="StartContainer for \"80541ade3029a4b408d519b40b8665451069774173ef72abff3484781a71cf5c\" returns successfully" Apr 17 23:37:39.771537 containerd[2002]: time="2026-04-17T23:37:39.771477935Z" level=info msg="CreateContainer within sandbox \"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7813baf2feead65f01d10e3cd6ab5a8f7546d7e5f0431378aed0aafbf48d244d\"" Apr 17 23:37:39.772577 containerd[2002]: time="2026-04-17T23:37:39.772299324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:37:39.776871 containerd[2002]: time="2026-04-17T23:37:39.775630584Z" level=info msg="StartContainer for \"7813baf2feead65f01d10e3cd6ab5a8f7546d7e5f0431378aed0aafbf48d244d\"" Apr 17 23:37:39.965564 systemd[1]: Started cri-containerd-7813baf2feead65f01d10e3cd6ab5a8f7546d7e5f0431378aed0aafbf48d244d.scope - libcontainer container 7813baf2feead65f01d10e3cd6ab5a8f7546d7e5f0431378aed0aafbf48d244d. 
Apr 17 23:37:40.140473 containerd[2002]: time="2026-04-17T23:37:40.140420906Z" level=info msg="StartContainer for \"7813baf2feead65f01d10e3cd6ab5a8f7546d7e5f0431378aed0aafbf48d244d\" returns successfully" Apr 17 23:37:40.254035 kubelet[3202]: I0417 23:37:40.253608 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:41.258351 kubelet[3202]: I0417 23:37:41.258067 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:37:41.358959 kubelet[3202]: I0417 23:37:41.354264 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-846d8859d6-lh2jg" podStartSLOduration=46.821054501 podStartE2EDuration="1m1.354239925s" podCreationTimestamp="2026-04-17 23:36:40 +0000 UTC" firstStartedPulling="2026-04-17 23:37:25.112704761 +0000 UTC m=+64.420170602" lastFinishedPulling="2026-04-17 23:37:39.645890173 +0000 UTC m=+78.953356026" observedRunningTime="2026-04-17 23:37:40.326721178 +0000 UTC m=+79.634187053" watchObservedRunningTime="2026-04-17 23:37:41.354239925 +0000 UTC m=+80.661705790" Apr 17 23:37:42.446310 systemd[1]: Started sshd@10-172.31.30.7:22-20.229.252.112:38568.service - OpenSSH per-connection server daemon (20.229.252.112:38568). Apr 17 23:37:43.574886 sshd[6335]: Accepted publickey for core from 20.229.252.112 port 38568 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:43.581816 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:43.616693 systemd-logind[1963]: New session 11 of user core. Apr 17 23:37:43.623141 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 17 23:37:43.665889 containerd[2002]: time="2026-04-17T23:37:43.665808494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:37:43.668899 containerd[2002]: time="2026-04-17T23:37:43.668825312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:43.675729 containerd[2002]: time="2026-04-17T23:37:43.675588917Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:43.677998 containerd[2002]: time="2026-04-17T23:37:43.677613305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.905263925s" Apr 17 23:37:43.677998 containerd[2002]: time="2026-04-17T23:37:43.677668624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:37:43.680420 containerd[2002]: time="2026-04-17T23:37:43.678795539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:44.071956 containerd[2002]: time="2026-04-17T23:37:44.071896646Z" level=info msg="CreateContainer within sandbox \"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:37:44.093215 containerd[2002]: time="2026-04-17T23:37:44.093016494Z" level=info msg="CreateContainer within sandbox \"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"916ce0ff6dda1dee6094677caee7b7208ebecd0a2e7eaa7e605a5d29f8dccdf9\"" Apr 17 23:37:44.094333 containerd[2002]: time="2026-04-17T23:37:44.094282253Z" level=info msg="StartContainer for \"916ce0ff6dda1dee6094677caee7b7208ebecd0a2e7eaa7e605a5d29f8dccdf9\"" Apr 17 23:37:44.306655 systemd[1]: Started cri-containerd-916ce0ff6dda1dee6094677caee7b7208ebecd0a2e7eaa7e605a5d29f8dccdf9.scope - libcontainer container 916ce0ff6dda1dee6094677caee7b7208ebecd0a2e7eaa7e605a5d29f8dccdf9. Apr 17 23:37:44.429762 containerd[2002]: time="2026-04-17T23:37:44.428840877Z" level=info msg="StartContainer for \"916ce0ff6dda1dee6094677caee7b7208ebecd0a2e7eaa7e605a5d29f8dccdf9\" returns successfully" Apr 17 23:37:45.197809 sshd[6335]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:45.206127 systemd-logind[1963]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:37:45.206518 systemd[1]: sshd@10-172.31.30.7:22-20.229.252.112:38568.service: Deactivated successfully. Apr 17 23:37:45.209440 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:37:45.218018 systemd-logind[1963]: Removed session 11. 
Apr 17 23:37:45.252547 kubelet[3202]: I0417 23:37:45.249587 3202 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:37:45.258001 kubelet[3202]: I0417 23:37:45.257954 3202 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:37:45.384298 systemd[1]: Started sshd@11-172.31.30.7:22-20.229.252.112:35242.service - OpenSSH per-connection server daemon (20.229.252.112:35242). Apr 17 23:37:45.498952 kubelet[3202]: I0417 23:37:45.495667 3202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tttlp" podStartSLOduration=44.800806695 podStartE2EDuration="1m3.481011486s" podCreationTimestamp="2026-04-17 23:36:42 +0000 UTC" firstStartedPulling="2026-04-17 23:37:25.087400655 +0000 UTC m=+64.394866500" lastFinishedPulling="2026-04-17 23:37:43.767605436 +0000 UTC m=+83.075071291" observedRunningTime="2026-04-17 23:37:45.479751496 +0000 UTC m=+84.787217358" watchObservedRunningTime="2026-04-17 23:37:45.481011486 +0000 UTC m=+84.788477345" Apr 17 23:37:46.449668 sshd[6395]: Accepted publickey for core from 20.229.252.112 port 35242 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:46.450436 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:46.455896 systemd-logind[1963]: New session 12 of user core. Apr 17 23:37:46.460082 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:37:47.394550 sshd[6395]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:47.401746 systemd[1]: sshd@11-172.31.30.7:22-20.229.252.112:35242.service: Deactivated successfully. Apr 17 23:37:47.405433 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:37:47.407516 systemd-logind[1963]: Session 12 logged out. 
Waiting for processes to exit. Apr 17 23:37:47.409297 systemd-logind[1963]: Removed session 12. Apr 17 23:37:47.583339 systemd[1]: Started sshd@12-172.31.30.7:22-20.229.252.112:35252.service - OpenSSH per-connection server daemon (20.229.252.112:35252). Apr 17 23:37:48.670930 sshd[6411]: Accepted publickey for core from 20.229.252.112 port 35252 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:48.675038 sshd[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:48.688050 systemd-logind[1963]: New session 13 of user core. Apr 17 23:37:48.693116 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:37:49.499121 sshd[6411]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:49.505479 systemd[1]: sshd@12-172.31.30.7:22-20.229.252.112:35252.service: Deactivated successfully. Apr 17 23:37:49.508618 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:37:49.509672 systemd-logind[1963]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:37:49.511348 systemd-logind[1963]: Removed session 13. Apr 17 23:37:54.663592 systemd[1]: Started sshd@13-172.31.30.7:22-20.229.252.112:35260.service - OpenSSH per-connection server daemon (20.229.252.112:35260). Apr 17 23:37:55.735407 sshd[6426]: Accepted publickey for core from 20.229.252.112 port 35260 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:55.737920 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:55.744498 systemd-logind[1963]: New session 14 of user core. Apr 17 23:37:55.750148 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:37:56.597616 sshd[6426]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:56.606227 systemd-logind[1963]: Session 14 logged out. Waiting for processes to exit. 
Apr 17 23:37:56.607026 systemd[1]: sshd@13-172.31.30.7:22-20.229.252.112:35260.service: Deactivated successfully. Apr 17 23:37:56.609694 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:37:56.612112 systemd-logind[1963]: Removed session 14. Apr 17 23:37:56.770297 systemd[1]: Started sshd@14-172.31.30.7:22-20.229.252.112:39042.service - OpenSSH per-connection server daemon (20.229.252.112:39042). Apr 17 23:37:57.747791 sshd[6453]: Accepted publickey for core from 20.229.252.112 port 39042 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:37:57.749610 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:57.755313 systemd-logind[1963]: New session 15 of user core. Apr 17 23:37:57.760349 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:37:59.040425 sshd[6453]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:59.049206 systemd[1]: sshd@14-172.31.30.7:22-20.229.252.112:39042.service: Deactivated successfully. Apr 17 23:37:59.052529 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:37:59.055112 systemd-logind[1963]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:37:59.057044 systemd-logind[1963]: Removed session 15. Apr 17 23:37:59.217196 systemd[1]: Started sshd@15-172.31.30.7:22-20.229.252.112:39058.service - OpenSSH per-connection server daemon (20.229.252.112:39058). Apr 17 23:37:59.449839 kubelet[3202]: I0417 23:37:59.448844 3202 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:00.321177 sshd[6486]: Accepted publickey for core from 20.229.252.112 port 39058 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:00.328501 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:00.339561 systemd-logind[1963]: New session 16 of user core. 
Apr 17 23:38:00.345150 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:38:02.805528 sshd[6486]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:02.843589 systemd[1]: sshd@15-172.31.30.7:22-20.229.252.112:39058.service: Deactivated successfully.
Apr 17 23:38:02.858779 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:38:02.881996 systemd-logind[1963]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:38:02.899446 systemd-logind[1963]: Removed session 16.
Apr 17 23:38:03.012370 systemd[1]: Started sshd@16-172.31.30.7:22-20.229.252.112:39070.service - OpenSSH per-connection server daemon (20.229.252.112:39070).
Apr 17 23:38:04.315815 sshd[6512]: Accepted publickey for core from 20.229.252.112 port 39070 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:04.320206 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:04.327939 systemd-logind[1963]: New session 17 of user core.
Apr 17 23:38:04.334119 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:38:05.800931 sshd[6512]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:05.805259 systemd[1]: sshd@16-172.31.30.7:22-20.229.252.112:39070.service: Deactivated successfully.
Apr 17 23:38:05.807944 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:38:05.809670 systemd-logind[1963]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:38:05.812455 systemd-logind[1963]: Removed session 17.
Apr 17 23:38:05.967228 systemd[1]: Started sshd@17-172.31.30.7:22-20.229.252.112:34914.service - OpenSSH per-connection server daemon (20.229.252.112:34914).
Apr 17 23:38:07.007904 sshd[6526]: Accepted publickey for core from 20.229.252.112 port 34914 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:07.011592 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:07.019415 systemd-logind[1963]: New session 18 of user core.
Apr 17 23:38:07.026079 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:38:08.080236 sshd[6526]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:08.085305 systemd-logind[1963]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:38:08.086300 systemd[1]: sshd@17-172.31.30.7:22-20.229.252.112:34914.service: Deactivated successfully.
Apr 17 23:38:08.088704 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:38:08.089878 systemd-logind[1963]: Removed session 18.
Apr 17 23:38:13.259317 systemd[1]: Started sshd@18-172.31.30.7:22-20.229.252.112:34926.service - OpenSSH per-connection server daemon (20.229.252.112:34926).
Apr 17 23:38:14.330635 sshd[6588]: Accepted publickey for core from 20.229.252.112 port 34926 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:14.334317 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:14.348310 systemd-logind[1963]: New session 19 of user core.
Apr 17 23:38:14.356078 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:38:15.397777 sshd[6588]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:15.404552 systemd-logind[1963]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:38:15.404975 systemd[1]: sshd@18-172.31.30.7:22-20.229.252.112:34926.service: Deactivated successfully.
Apr 17 23:38:15.411540 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:38:15.418221 systemd-logind[1963]: Removed session 19.
Apr 17 23:38:20.579495 systemd[1]: Started sshd@19-172.31.30.7:22-20.229.252.112:49810.service - OpenSSH per-connection server daemon (20.229.252.112:49810).
Apr 17 23:38:21.449452 containerd[2002]: time="2026-04-17T23:38:21.427616888Z" level=info msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\""
Apr 17 23:38:21.669197 sshd[6601]: Accepted publickey for core from 20.229.252.112 port 49810 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:38:21.674008 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:21.688809 systemd-logind[1963]: New session 20 of user core.
Apr 17 23:38:21.693187 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.038 [WARNING][6613] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0", GenerateName:"calico-kube-controllers-744ddc4d96-", Namespace:"calico-system", SelfLink:"", UID:"b369e486-7b42-48cf-8775-02be039bd5a7", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744ddc4d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b", Pod:"calico-kube-controllers-744ddc4d96-zx6kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba41b95f357", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.045 [INFO][6613] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.045 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" iface="eth0" netns=""
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.046 [INFO][6613] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.046 [INFO][6613] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.445 [INFO][6621] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.454 [INFO][6621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.455 [INFO][6621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.473 [WARNING][6621] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.473 [INFO][6621] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.477 [INFO][6621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:22.496668 containerd[2002]: 2026-04-17 23:38:22.491 [INFO][6613] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.550000 containerd[2002]: time="2026-04-17T23:38:22.548424689Z" level=info msg="TearDown network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" successfully"
Apr 17 23:38:22.550000 containerd[2002]: time="2026-04-17T23:38:22.548480980Z" level=info msg="StopPodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" returns successfully"
Apr 17 23:38:22.567571 containerd[2002]: time="2026-04-17T23:38:22.567528632Z" level=info msg="RemovePodSandbox for \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\""
Apr 17 23:38:22.576248 containerd[2002]: time="2026-04-17T23:38:22.576178097Z" level=info msg="Forcibly stopping sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\""
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.709 [WARNING][6642] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0", GenerateName:"calico-kube-controllers-744ddc4d96-", Namespace:"calico-system", SelfLink:"", UID:"b369e486-7b42-48cf-8775-02be039bd5a7", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744ddc4d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"081198dfa48128d54b5c51af0cef033e0e2e2060f508701c8d46884b422d9d3b", Pod:"calico-kube-controllers-744ddc4d96-zx6kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba41b95f357", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.709 [INFO][6642] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.709 [INFO][6642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" iface="eth0" netns=""
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.711 [INFO][6642] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.711 [INFO][6642] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.755 [INFO][6649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.755 [INFO][6649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.755 [INFO][6649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.764 [WARNING][6649] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.765 [INFO][6649] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" HandleID="k8s-pod-network.fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea" Workload="ip--172--31--30--7-k8s-calico--kube--controllers--744ddc4d96--zx6kx-eth0"
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.767 [INFO][6649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:22.777074 containerd[2002]: 2026-04-17 23:38:22.772 [INFO][6642] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea"
Apr 17 23:38:22.783598 containerd[2002]: time="2026-04-17T23:38:22.778042487Z" level=info msg="TearDown network for sandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" successfully"
Apr 17 23:38:23.018418 containerd[2002]: time="2026-04-17T23:38:23.018250935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:23.018418 containerd[2002]: time="2026-04-17T23:38:23.018342146Z" level=info msg="RemovePodSandbox \"fbe299da30f533b6c73e7156450826975a25ac97662d048010af3fd6f6b5c8ea\" returns successfully"
Apr 17 23:38:23.023548 containerd[2002]: time="2026-04-17T23:38:23.023251779Z" level=info msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\""
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.131 [WARNING][6665] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"ab3820f2-82fb-4fe2-a46c-ca486562fb4d", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a", Pod:"calico-apiserver-846d8859d6-9b2kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9944baf7d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.131 [INFO][6665] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.131 [INFO][6665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" iface="eth0" netns=""
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.131 [INFO][6665] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.131 [INFO][6665] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.180 [INFO][6672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.180 [INFO][6672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.180 [INFO][6672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.197 [WARNING][6672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.197 [INFO][6672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.200 [INFO][6672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:23.208130 containerd[2002]: 2026-04-17 23:38:23.204 [INFO][6665] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.211903 containerd[2002]: time="2026-04-17T23:38:23.208241265Z" level=info msg="TearDown network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" successfully"
Apr 17 23:38:23.211903 containerd[2002]: time="2026-04-17T23:38:23.208319482Z" level=info msg="StopPodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" returns successfully"
Apr 17 23:38:23.246562 containerd[2002]: time="2026-04-17T23:38:23.246475503Z" level=info msg="RemovePodSandbox for \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\""
Apr 17 23:38:23.246562 containerd[2002]: time="2026-04-17T23:38:23.246561386Z" level=info msg="Forcibly stopping sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\""
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.303 [WARNING][6686] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"ab3820f2-82fb-4fe2-a46c-ca486562fb4d", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"86d51053f297fcd376700414e7601f6f5e67005ce59846d614f93b9e8d43e94a", Pod:"calico-apiserver-846d8859d6-9b2kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid9944baf7d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.304 [INFO][6686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.304 [INFO][6686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" iface="eth0" netns=""
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.304 [INFO][6686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.304 [INFO][6686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.356 [INFO][6693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.357 [INFO][6693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.357 [INFO][6693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.367 [WARNING][6693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.367 [INFO][6693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" HandleID="k8s-pod-network.8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--9b2kl-eth0"
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.370 [INFO][6693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:23.377575 containerd[2002]: 2026-04-17 23:38:23.374 [INFO][6686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d"
Apr 17 23:38:23.379317 containerd[2002]: time="2026-04-17T23:38:23.377626370Z" level=info msg="TearDown network for sandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" successfully"
Apr 17 23:38:23.389178 containerd[2002]: time="2026-04-17T23:38:23.388908437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:23.389178 containerd[2002]: time="2026-04-17T23:38:23.389004549Z" level=info msg="RemovePodSandbox \"8fde6540e13bdbe93b9bf4f750a74783e03434f6c2a8ed33be5c84c5bcccb36d\" returns successfully"
Apr 17 23:38:23.390341 containerd[2002]: time="2026-04-17T23:38:23.389895136Z" level=info msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\""
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.447 [WARNING][6708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"1537c3df-d617-414a-93ca-eeed9a0ad8c4", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805", Pod:"calico-apiserver-846d8859d6-lh2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali306e0539285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.448 [INFO][6708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.448 [INFO][6708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" iface="eth0" netns=""
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.448 [INFO][6708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.448 [INFO][6708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.490 [INFO][6715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.491 [INFO][6715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.491 [INFO][6715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.511 [WARNING][6715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.511 [INFO][6715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.514 [INFO][6715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:23.522994 containerd[2002]: 2026-04-17 23:38:23.517 [INFO][6708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.522994 containerd[2002]: time="2026-04-17T23:38:23.521177015Z" level=info msg="TearDown network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" successfully"
Apr 17 23:38:23.522994 containerd[2002]: time="2026-04-17T23:38:23.521208121Z" level=info msg="StopPodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" returns successfully"
Apr 17 23:38:23.526815 containerd[2002]: time="2026-04-17T23:38:23.526746993Z" level=info msg="RemovePodSandbox for \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\""
Apr 17 23:38:23.527011 containerd[2002]: time="2026-04-17T23:38:23.526979654Z" level=info msg="Forcibly stopping sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\""
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.581 [WARNING][6729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0", GenerateName:"calico-apiserver-846d8859d6-", Namespace:"calico-system", SelfLink:"", UID:"1537c3df-d617-414a-93ca-eeed9a0ad8c4", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d8859d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"cf2d775bb9869bcb65fe68f485d01e4b6d331f132e2d38a54cffc8e10d56d805", Pod:"calico-apiserver-846d8859d6-lh2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali306e0539285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.582 [INFO][6729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.582 [INFO][6729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" iface="eth0" netns=""
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.582 [INFO][6729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.582 [INFO][6729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.621 [INFO][6736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.621 [INFO][6736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.621 [INFO][6736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.636 [WARNING][6736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.636 [INFO][6736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" HandleID="k8s-pod-network.9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772" Workload="ip--172--31--30--7-k8s-calico--apiserver--846d8859d6--lh2jg-eth0"
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.640 [INFO][6736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:23.648553 containerd[2002]: 2026-04-17 23:38:23.645 [INFO][6729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772"
Apr 17 23:38:23.648553 containerd[2002]: time="2026-04-17T23:38:23.648164895Z" level=info msg="TearDown network for sandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" successfully"
Apr 17 23:38:23.659360 containerd[2002]: time="2026-04-17T23:38:23.658726683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:23.659360 containerd[2002]: time="2026-04-17T23:38:23.658829039Z" level=info msg="RemovePodSandbox \"9aaf1d21ccb6e786575f089260bef0e2361824581a7dfb122c5633c52e7b6772\" returns successfully" Apr 17 23:38:23.660148 containerd[2002]: time="2026-04-17T23:38:23.659420722Z" level=info msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" Apr 17 23:38:23.689884 sshd[6601]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:23.711414 systemd[1]: sshd@19-172.31.30.7:22-20.229.252.112:49810.service: Deactivated successfully. Apr 17 23:38:23.716301 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:38:23.720648 systemd-logind[1963]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:38:23.724357 systemd-logind[1963]: Removed session 20. Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.744 [WARNING][6750] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd62a40-a858-425f-a3e8-4e85787fe5f7", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5", Pod:"csi-node-driver-tttlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52a6b536bed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.744 [INFO][6750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.744 [INFO][6750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" iface="eth0" netns="" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.744 [INFO][6750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.744 [INFO][6750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.772 [INFO][6759] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.772 [INFO][6759] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.773 [INFO][6759] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.782 [WARNING][6759] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.782 [INFO][6759] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.786 [INFO][6759] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:23.792394 containerd[2002]: 2026-04-17 23:38:23.789 [INFO][6750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.792394 containerd[2002]: time="2026-04-17T23:38:23.792287104Z" level=info msg="TearDown network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" successfully" Apr 17 23:38:23.792394 containerd[2002]: time="2026-04-17T23:38:23.792316352Z" level=info msg="StopPodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" returns successfully" Apr 17 23:38:23.796062 containerd[2002]: time="2026-04-17T23:38:23.793831661Z" level=info msg="RemovePodSandbox for \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" Apr 17 23:38:23.796062 containerd[2002]: time="2026-04-17T23:38:23.793900035Z" level=info msg="Forcibly stopping sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\"" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.852 [WARNING][6773] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd62a40-a858-425f-a3e8-4e85787fe5f7", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"d8950fc32e676cc9338179b55996a89f11491d176e3d3bf1855b5118595ecca5", Pod:"csi-node-driver-tttlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52a6b536bed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.852 [INFO][6773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.852 [INFO][6773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" iface="eth0" netns="" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.852 [INFO][6773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.852 [INFO][6773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.885 [INFO][6781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.885 [INFO][6781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.885 [INFO][6781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.893 [WARNING][6781] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.893 [INFO][6781] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" HandleID="k8s-pod-network.b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Workload="ip--172--31--30--7-k8s-csi--node--driver--tttlp-eth0" Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.895 [INFO][6781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:23.899553 containerd[2002]: 2026-04-17 23:38:23.897 [INFO][6773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6" Apr 17 23:38:23.901700 containerd[2002]: time="2026-04-17T23:38:23.899937094Z" level=info msg="TearDown network for sandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" successfully" Apr 17 23:38:23.909070 containerd[2002]: time="2026-04-17T23:38:23.909011366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:23.936291 containerd[2002]: time="2026-04-17T23:38:23.936147837Z" level=info msg="RemovePodSandbox \"b2d728091904a5c43300098c8ab8223d652b585227d379b4ed853971ae3874a6\" returns successfully" Apr 17 23:38:23.937288 containerd[2002]: time="2026-04-17T23:38:23.936911309Z" level=info msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.023 [WARNING][6795] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5afec07c-296f-444d-884a-ca8b664e1c97", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff", Pod:"coredns-66bc5c9577-hdsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7302cfc6fe9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.024 [INFO][6795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.024 [INFO][6795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" iface="eth0" netns="" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.024 [INFO][6795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.024 [INFO][6795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.054 [INFO][6802] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.054 [INFO][6802] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.054 [INFO][6802] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.065 [WARNING][6802] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.065 [INFO][6802] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.070 [INFO][6802] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:24.075273 containerd[2002]: 2026-04-17 23:38:24.072 [INFO][6795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.075273 containerd[2002]: time="2026-04-17T23:38:24.075119487Z" level=info msg="TearDown network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" successfully" Apr 17 23:38:24.075273 containerd[2002]: time="2026-04-17T23:38:24.075161705Z" level=info msg="StopPodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" returns successfully" Apr 17 23:38:24.077086 containerd[2002]: time="2026-04-17T23:38:24.075737095Z" level=info msg="RemovePodSandbox for \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" Apr 17 23:38:24.077086 containerd[2002]: time="2026-04-17T23:38:24.075777751Z" level=info msg="Forcibly stopping sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\"" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.121 [WARNING][6816] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5afec07c-296f-444d-884a-ca8b664e1c97", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a9e5c5abf0775fdded4a434fec633c347ab9be8a5dd728eaba032bbed08b4eff", Pod:"coredns-66bc5c9577-hdsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7302cfc6fe9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.121 [INFO][6816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.121 [INFO][6816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" iface="eth0" netns="" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.121 [INFO][6816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.121 [INFO][6816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.155 [INFO][6823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.156 [INFO][6823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.157 [INFO][6823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.169 [WARNING][6823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.169 [INFO][6823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" HandleID="k8s-pod-network.b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--hdsmf-eth0" Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.172 [INFO][6823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:24.184939 containerd[2002]: 2026-04-17 23:38:24.178 [INFO][6816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c" Apr 17 23:38:24.186364 containerd[2002]: time="2026-04-17T23:38:24.184985054Z" level=info msg="TearDown network for sandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" successfully" Apr 17 23:38:24.401991 containerd[2002]: time="2026-04-17T23:38:24.401076687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:24.401991 containerd[2002]: time="2026-04-17T23:38:24.401168077Z" level=info msg="RemovePodSandbox \"b880ad6a51edabedc2b284f1efb830d5c6a151b8c1fdc929e4ca90d833e6ec2c\" returns successfully" Apr 17 23:38:24.401991 containerd[2002]: time="2026-04-17T23:38:24.401701887Z" level=info msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.463 [WARNING][6837] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6754cc5-f109-476a-ab6e-ba6495a198d8", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8", Pod:"coredns-66bc5c9577-cq57f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba9b4bdd9a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.464 [INFO][6837] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.464 [INFO][6837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" iface="eth0" netns="" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.464 [INFO][6837] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.464 [INFO][6837] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.534 [INFO][6845] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.534 [INFO][6845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.534 [INFO][6845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.546 [WARNING][6845] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.547 [INFO][6845] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.556 [INFO][6845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:24.574898 containerd[2002]: 2026-04-17 23:38:24.563 [INFO][6837] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.574898 containerd[2002]: time="2026-04-17T23:38:24.574761531Z" level=info msg="TearDown network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" successfully" Apr 17 23:38:24.574898 containerd[2002]: time="2026-04-17T23:38:24.574809811Z" level=info msg="StopPodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" returns successfully" Apr 17 23:38:24.576454 containerd[2002]: time="2026-04-17T23:38:24.576409906Z" level=info msg="RemovePodSandbox for \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" Apr 17 23:38:24.576598 containerd[2002]: time="2026-04-17T23:38:24.576459473Z" level=info msg="Forcibly stopping sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\"" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.646 [WARNING][6860] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6754cc5-f109-476a-ab6e-ba6495a198d8", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"a0f5ca7798fd0d68d814724d72e9d89d7f2b9136a9ba978cbc98ec9578fc61b8", Pod:"coredns-66bc5c9577-cq57f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba9b4bdd9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.647 [INFO][6860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.647 [INFO][6860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" iface="eth0" netns="" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.647 [INFO][6860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.647 [INFO][6860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.688 [INFO][6867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.688 [INFO][6867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.688 [INFO][6867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.705 [WARNING][6867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.705 [INFO][6867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" HandleID="k8s-pod-network.ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Workload="ip--172--31--30--7-k8s-coredns--66bc5c9577--cq57f-eth0" Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.708 [INFO][6867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:24.716682 containerd[2002]: 2026-04-17 23:38:24.713 [INFO][6860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644" Apr 17 23:38:24.716682 containerd[2002]: time="2026-04-17T23:38:24.716617063Z" level=info msg="TearDown network for sandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" successfully" Apr 17 23:38:24.726317 containerd[2002]: time="2026-04-17T23:38:24.726256585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:24.726674 containerd[2002]: time="2026-04-17T23:38:24.726359042Z" level=info msg="RemovePodSandbox \"ac540d53db60085a838847f85c509304d59edfe5dcd5e097464bd8d486e9f644\" returns successfully" Apr 17 23:38:24.727424 containerd[2002]: time="2026-04-17T23:38:24.727370646Z" level=info msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.780 [WARNING][6881] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e73a167e-3582-40a4-9b34-7572429fc278", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740", Pod:"goldmane-cccfbd5cf-8w592", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calieb77bbcdef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.780 [INFO][6881] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.780 [INFO][6881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" iface="eth0" netns="" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.780 [INFO][6881] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.780 [INFO][6881] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.808 [INFO][6888] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.809 [INFO][6888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.809 [INFO][6888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.816 [WARNING][6888] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.816 [INFO][6888] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.820 [INFO][6888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:24.825380 containerd[2002]: 2026-04-17 23:38:24.822 [INFO][6881] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:24.833297 containerd[2002]: time="2026-04-17T23:38:24.825418805Z" level=info msg="TearDown network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" successfully" Apr 17 23:38:24.833297 containerd[2002]: time="2026-04-17T23:38:24.825451530Z" level=info msg="StopPodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" returns successfully" Apr 17 23:38:24.833297 containerd[2002]: time="2026-04-17T23:38:24.826102074Z" level=info msg="RemovePodSandbox for \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" Apr 17 23:38:24.833297 containerd[2002]: time="2026-04-17T23:38:24.826130200Z" level=info msg="Forcibly stopping sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\"" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.911 [WARNING][6902] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e73a167e-3582-40a4-9b34-7572429fc278", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-7", ContainerID:"7a99383dadf115cfef15c3d8b517840d51a2d947290748fe6bcc033af45b2740", Pod:"goldmane-cccfbd5cf-8w592", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieb77bbcdef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.911 [INFO][6902] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.912 [INFO][6902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" iface="eth0" netns="" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.912 [INFO][6902] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.912 [INFO][6902] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.982 [INFO][6910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.982 [INFO][6910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:24.982 [INFO][6910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:25.000 [WARNING][6910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:25.000 [INFO][6910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" HandleID="k8s-pod-network.b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Workload="ip--172--31--30--7-k8s-goldmane--cccfbd5cf--8w592-eth0" Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:25.010 [INFO][6910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:25.030271 containerd[2002]: 2026-04-17 23:38:25.024 [INFO][6902] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89" Apr 17 23:38:25.030271 containerd[2002]: time="2026-04-17T23:38:25.030090911Z" level=info msg="TearDown network for sandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" successfully" Apr 17 23:38:25.044031 containerd[2002]: time="2026-04-17T23:38:25.042944672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:38:25.044031 containerd[2002]: time="2026-04-17T23:38:25.043161419Z" level=info msg="RemovePodSandbox \"b03180091f5a7631e9f9a9985e9a2f3c905f447835ebd1c5b18d54bc88750b89\" returns successfully" Apr 17 23:38:28.910352 systemd[1]: Started sshd@20-172.31.30.7:22-20.229.252.112:59310.service - OpenSSH per-connection server daemon (20.229.252.112:59310). 
Apr 17 23:38:30.078019 sshd[6960]: Accepted publickey for core from 20.229.252.112 port 59310 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:30.084214 sshd[6960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:30.094936 systemd-logind[1963]: New session 21 of user core. Apr 17 23:38:30.101205 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:38:31.674924 sshd[6960]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:31.679994 systemd[1]: sshd@20-172.31.30.7:22-20.229.252.112:59310.service: Deactivated successfully. Apr 17 23:38:31.683578 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:38:31.684751 systemd-logind[1963]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:38:31.686231 systemd-logind[1963]: Removed session 21. Apr 17 23:38:36.607749 systemd[1]: run-containerd-runc-k8s.io-3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa-runc.ixgjSD.mount: Deactivated successfully. Apr 17 23:38:46.051102 systemd[1]: cri-containerd-584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa.scope: Deactivated successfully. Apr 17 23:38:46.051413 systemd[1]: cri-containerd-584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa.scope: Consumed 10.476s CPU time. Apr 17 23:38:46.247796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:46.300470 containerd[2002]: time="2026-04-17T23:38:46.290004327Z" level=info msg="shim disconnected" id=584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa namespace=k8s.io Apr 17 23:38:46.302012 containerd[2002]: time="2026-04-17T23:38:46.300473143Z" level=warning msg="cleaning up after shim disconnected" id=584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa namespace=k8s.io Apr 17 23:38:46.302012 containerd[2002]: time="2026-04-17T23:38:46.300500641Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:46.575737 systemd[1]: cri-containerd-090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a.scope: Deactivated successfully. Apr 17 23:38:46.576574 systemd[1]: cri-containerd-090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a.scope: Consumed 4.178s CPU time, 15.6M memory peak, 0B memory swap peak. Apr 17 23:38:46.607091 containerd[2002]: time="2026-04-17T23:38:46.606948094Z" level=info msg="shim disconnected" id=090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a namespace=k8s.io Apr 17 23:38:46.607091 containerd[2002]: time="2026-04-17T23:38:46.607018044Z" level=warning msg="cleaning up after shim disconnected" id=090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a namespace=k8s.io Apr 17 23:38:46.607091 containerd[2002]: time="2026-04-17T23:38:46.607030967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:46.611572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:47.284513 kubelet[3202]: I0417 23:38:47.284454 3202 scope.go:117] "RemoveContainer" containerID="090100df0b83fb38de484aadb0aab9d41842ccaf16e0745484d7371922fbc57a" Apr 17 23:38:47.292371 kubelet[3202]: I0417 23:38:47.284989 3202 scope.go:117] "RemoveContainer" containerID="584894f5689b98becb99d67a410a18017ba794879b1b6fa9efa263d1c92fe8aa" Apr 17 23:38:47.366967 containerd[2002]: time="2026-04-17T23:38:47.366732779Z" level=info msg="CreateContainer within sandbox \"33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 23:38:47.376375 containerd[2002]: time="2026-04-17T23:38:47.376182172Z" level=info msg="CreateContainer within sandbox \"d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 23:38:47.516181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219074573.mount: Deactivated successfully. 
Apr 17 23:38:47.519083 containerd[2002]: time="2026-04-17T23:38:47.517948851Z" level=info msg="CreateContainer within sandbox \"d170963e6df28635e953039dde5428762e3da014258254f65c0536f624e8cbdf\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a64d37ce78f794bdb6f263a0f349bf2e5c0d00b5d5a04fb40e2c8575b042482e\"" Apr 17 23:38:47.524614 containerd[2002]: time="2026-04-17T23:38:47.524572877Z" level=info msg="StartContainer for \"a64d37ce78f794bdb6f263a0f349bf2e5c0d00b5d5a04fb40e2c8575b042482e\"" Apr 17 23:38:47.532046 containerd[2002]: time="2026-04-17T23:38:47.532001490Z" level=info msg="CreateContainer within sandbox \"33ccd265267566acc7df89204a31b4a19c00fff7f32421acf9417fd34c6d37f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bb9719edc5c2b1f6b672bd97e77281f0b866d5d9786e7d17db57b22ec35d84ef\"" Apr 17 23:38:47.533482 containerd[2002]: time="2026-04-17T23:38:47.533427110Z" level=info msg="StartContainer for \"bb9719edc5c2b1f6b672bd97e77281f0b866d5d9786e7d17db57b22ec35d84ef\"" Apr 17 23:38:47.599104 systemd[1]: Started cri-containerd-bb9719edc5c2b1f6b672bd97e77281f0b866d5d9786e7d17db57b22ec35d84ef.scope - libcontainer container bb9719edc5c2b1f6b672bd97e77281f0b866d5d9786e7d17db57b22ec35d84ef. Apr 17 23:38:47.611209 systemd[1]: Started cri-containerd-a64d37ce78f794bdb6f263a0f349bf2e5c0d00b5d5a04fb40e2c8575b042482e.scope - libcontainer container a64d37ce78f794bdb6f263a0f349bf2e5c0d00b5d5a04fb40e2c8575b042482e. 
Apr 17 23:38:47.720138 containerd[2002]: time="2026-04-17T23:38:47.720095022Z" level=info msg="StartContainer for \"a64d37ce78f794bdb6f263a0f349bf2e5c0d00b5d5a04fb40e2c8575b042482e\" returns successfully" Apr 17 23:38:47.738092 containerd[2002]: time="2026-04-17T23:38:47.738034083Z" level=info msg="StartContainer for \"bb9719edc5c2b1f6b672bd97e77281f0b866d5d9786e7d17db57b22ec35d84ef\" returns successfully" Apr 17 23:38:51.470213 systemd[1]: cri-containerd-5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d.scope: Deactivated successfully. Apr 17 23:38:51.470635 systemd[1]: cri-containerd-5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d.scope: Consumed 1.902s CPU time, 13.5M memory peak, 0B memory swap peak. Apr 17 23:38:51.500528 containerd[2002]: time="2026-04-17T23:38:51.500413447Z" level=info msg="shim disconnected" id=5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d namespace=k8s.io Apr 17 23:38:51.500528 containerd[2002]: time="2026-04-17T23:38:51.500522765Z" level=warning msg="cleaning up after shim disconnected" id=5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d namespace=k8s.io Apr 17 23:38:51.500528 containerd[2002]: time="2026-04-17T23:38:51.500534164Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:51.509133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:52.330164 kubelet[3202]: I0417 23:38:52.329556 3202 scope.go:117] "RemoveContainer" containerID="5158a957790e1a695d02df73f5fd7652650f50ca8ff9cf90294f32bede7bc70d" Apr 17 23:38:52.333125 containerd[2002]: time="2026-04-17T23:38:52.333075065Z" level=info msg="CreateContainer within sandbox \"8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 17 23:38:52.356836 containerd[2002]: time="2026-04-17T23:38:52.356678017Z" level=info msg="CreateContainer within sandbox \"8371fde310047c39ec0cbd66317f335ff7f80ebdc50a5f1e020c4935d69a3a1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a0174483f5e60009fdc335a5376c9b098913b76712f6fc6c10cc479c763f01c3\"" Apr 17 23:38:52.357618 containerd[2002]: time="2026-04-17T23:38:52.357259783Z" level=info msg="StartContainer for \"a0174483f5e60009fdc335a5376c9b098913b76712f6fc6c10cc479c763f01c3\"" Apr 17 23:38:52.405203 systemd[1]: Started cri-containerd-a0174483f5e60009fdc335a5376c9b098913b76712f6fc6c10cc479c763f01c3.scope - libcontainer container a0174483f5e60009fdc335a5376c9b098913b76712f6fc6c10cc479c763f01c3. Apr 17 23:38:52.460366 containerd[2002]: time="2026-04-17T23:38:52.460306289Z" level=info msg="StartContainer for \"a0174483f5e60009fdc335a5376c9b098913b76712f6fc6c10cc479c763f01c3\" returns successfully" Apr 17 23:38:54.182196 kubelet[3202]: E0417 23:38:54.182135 3202 request.go:1196] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)" Apr 17 23:38:54.207398 kubelet[3202]: E0417 23:38:54.207341 3202 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Apr 17 23:38:58.589600 systemd[1]: run-containerd-runc-k8s.io-3231d736146388931c4ef29057ee29379cb2367f932bb3662f2ddcecad6978aa-runc.vgN4U3.mount: Deactivated successfully.