Apr 13 20:18:16.949328 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:18:16.949364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:18:16.949382 kernel: BIOS-provided physical RAM map:
Apr 13 20:18:16.949393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:18:16.949403 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 13 20:18:16.949413 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 13 20:18:16.949427 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 13 20:18:16.949438 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 13 20:18:16.949449 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 13 20:18:16.949464 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 13 20:18:16.949475 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 13 20:18:16.949486 kernel: NX (Execute Disable) protection: active
Apr 13 20:18:16.949497 kernel: APIC: Static calls initialized
Apr 13 20:18:16.949508 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:18:16.949522 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 13 20:18:16.949538 kernel: SMBIOS 2.7 present.
Apr 13 20:18:16.949550 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 13 20:18:16.949562 kernel: Hypervisor detected: KVM
Apr 13 20:18:16.949575 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:18:16.949587 kernel: kvm-clock: using sched offset of 3582957665 cycles
Apr 13 20:18:16.949599 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:18:16.949612 kernel: tsc: Detected 2500.004 MHz processor
Apr 13 20:18:16.949625 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:18:16.949637 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:18:16.949650 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 13 20:18:16.949665 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:18:16.949677 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:18:16.949690 kernel: Using GB pages for direct mapping
Apr 13 20:18:16.949701 kernel: Secure boot disabled
Apr 13 20:18:16.949714 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:18:16.949726 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 13 20:18:16.949739 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 20:18:16.949751 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 20:18:16.949764 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 20:18:16.949779 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 13 20:18:16.949792 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 13 20:18:16.949805 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 20:18:16.949817 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 20:18:16.949829 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 13 20:18:16.949842 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 13 20:18:16.949861 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:18:16.949877 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:18:16.949890 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 13 20:18:16.949903 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 13 20:18:16.949916 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 13 20:18:16.949930 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 13 20:18:16.949943 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 13 20:18:16.949956 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 13 20:18:16.949972 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 13 20:18:16.949985 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 13 20:18:16.949998 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 13 20:18:16.950011 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 13 20:18:16.950023 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 13 20:18:16.950036 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 13 20:18:16.950049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:18:16.950063 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:18:16.950075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 13 20:18:16.950091 kernel: NUMA: Initialized distance table, cnt=1
Apr 13 20:18:16.950115 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 13 20:18:16.950129 kernel: Zone ranges:
Apr 13 20:18:16.950142 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:18:16.950154 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 13 20:18:16.950167 kernel: Normal empty
Apr 13 20:18:16.950180 kernel: Movable zone start for each node
Apr 13 20:18:16.950193 kernel: Early memory node ranges
Apr 13 20:18:16.950207 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:18:16.950219 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 13 20:18:16.950236 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 13 20:18:16.950249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 13 20:18:16.950275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:18:16.950288 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:18:16.950302 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 13 20:18:16.950316 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 13 20:18:16.950330 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:18:16.950345 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:18:16.950359 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 13 20:18:16.950377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:18:16.950392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:18:16.950406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:18:16.950421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:18:16.950435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:18:16.950449 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:18:16.950464 kernel: TSC deadline timer available
Apr 13 20:18:16.950479 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:18:16.950492 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:18:16.950510 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 13 20:18:16.950524 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:18:16.950538 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:18:16.950553 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:18:16.950568 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:18:16.950582 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:18:16.950596 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:18:16.950610 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:18:16.950625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:18:16.950644 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:18:16.950659 kernel: random: crng init done
Apr 13 20:18:16.950673 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:18:16.950688 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:18:16.950701 kernel: Fallback order for Node 0: 0
Apr 13 20:18:16.950716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 13 20:18:16.950730 kernel: Policy zone: DMA32
Apr 13 20:18:16.950745 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:18:16.950762 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved)
Apr 13 20:18:16.950777 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:18:16.950791 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:18:16.950805 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:18:16.950820 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:18:16.950834 kernel: Dynamic Preempt: voluntary
Apr 13 20:18:16.950848 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:18:16.950863 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:18:16.950878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:18:16.950895 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:18:16.950910 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:18:16.950924 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:18:16.950939 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:18:16.950954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:18:16.950968 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:18:16.950982 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:18:16.951010 kernel: Console: colour dummy device 80x25
Apr 13 20:18:16.951024 kernel: printk: console [tty0] enabled
Apr 13 20:18:16.951040 kernel: printk: console [ttyS0] enabled
Apr 13 20:18:16.951055 kernel: ACPI: Core revision 20230628
Apr 13 20:18:16.951070 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 13 20:18:16.951090 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:18:16.951133 kernel: x2apic enabled
Apr 13 20:18:16.951147 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:18:16.951161 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 13 20:18:16.951176 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Apr 13 20:18:16.951194 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 13 20:18:16.951208 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 13 20:18:16.951222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:18:16.951236 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:18:16.951249 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:18:16.951263 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 13 20:18:16.951278 kernel: RETBleed: Vulnerable
Apr 13 20:18:16.951292 kernel: Speculative Store Bypass: Vulnerable
Apr 13 20:18:16.951307 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:18:16.951322 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:18:16.951339 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 13 20:18:16.951354 kernel: active return thunk: its_return_thunk
Apr 13 20:18:16.951369 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:18:16.951384 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:18:16.951401 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:18:16.951416 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:18:16.951432 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 13 20:18:16.951448 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 13 20:18:16.951464 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:18:16.951480 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:18:16.951496 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:18:16.951514 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:18:16.951530 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:18:16.951546 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 13 20:18:16.951561 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 13 20:18:16.951577 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 13 20:18:16.951592 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 13 20:18:16.951608 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 13 20:18:16.951624 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 13 20:18:16.951650 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 13 20:18:16.951666 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:18:16.951681 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:18:16.951697 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:18:16.951716 kernel: landlock: Up and running.
Apr 13 20:18:16.951732 kernel: SELinux: Initializing.
Apr 13 20:18:16.951749 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:18:16.951765 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:18:16.951781 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 13 20:18:16.951798 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:18:16.951814 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:18:16.951831 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:18:16.951847 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 13 20:18:16.951863 kernel: signal: max sigframe size: 3632
Apr 13 20:18:16.951881 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:18:16.951897 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:18:16.951913 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:18:16.951928 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:18:16.951944 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:18:16.951959 kernel: .... node #0, CPUs: #1
Apr 13 20:18:16.951975 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:18:16.951991 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:18:16.952009 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:18:16.952024 kernel: smpboot: Max logical packages: 1
Apr 13 20:18:16.952039 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Apr 13 20:18:16.952055 kernel: devtmpfs: initialized
Apr 13 20:18:16.952070 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:18:16.952085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 13 20:18:16.952120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:18:16.952136 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:18:16.952151 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:18:16.952170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:18:16.952185 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:18:16.952200 kernel: audit: type=2000 audit(1776111497.111:1): state=initialized audit_enabled=0 res=1
Apr 13 20:18:16.952215 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:18:16.952231 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:18:16.952246 kernel: cpuidle: using governor menu
Apr 13 20:18:16.952261 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:18:16.952276 kernel: dca service started, version 1.12.1
Apr 13 20:18:16.952292 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:18:16.952310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:18:16.952325 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:18:16.952340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:18:16.952355 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:18:16.952371 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:18:16.952387 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:18:16.952402 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:18:16.952417 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:18:16.952432 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:18:16.952450 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:18:16.952465 kernel: ACPI: Interpreter enabled
Apr 13 20:18:16.952480 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:18:16.952495 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:18:16.952510 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:18:16.952525 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:18:16.952540 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 13 20:18:16.952555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:18:16.952776 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:18:16.952916 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:18:16.953043 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:18:16.953061 kernel: acpiphp: Slot [3] registered
Apr 13 20:18:16.953077 kernel: acpiphp: Slot [4] registered
Apr 13 20:18:16.953092 kernel: acpiphp: Slot [5] registered
Apr 13 20:18:16.953119 kernel: acpiphp: Slot [6] registered
Apr 13 20:18:16.953134 kernel: acpiphp: Slot [7] registered
Apr 13 20:18:16.953153 kernel: acpiphp: Slot [8] registered
Apr 13 20:18:16.953168 kernel: acpiphp: Slot [9] registered
Apr 13 20:18:16.953183 kernel: acpiphp: Slot [10] registered
Apr 13 20:18:16.953198 kernel: acpiphp: Slot [11] registered
Apr 13 20:18:16.953213 kernel: acpiphp: Slot [12] registered
Apr 13 20:18:16.953225 kernel: acpiphp: Slot [13] registered
Apr 13 20:18:16.953237 kernel: acpiphp: Slot [14] registered
Apr 13 20:18:16.953250 kernel: acpiphp: Slot [15] registered
Apr 13 20:18:16.953264 kernel: acpiphp: Slot [16] registered
Apr 13 20:18:16.953277 kernel: acpiphp: Slot [17] registered
Apr 13 20:18:16.953295 kernel: acpiphp: Slot [18] registered
Apr 13 20:18:16.953311 kernel: acpiphp: Slot [19] registered
Apr 13 20:18:16.953326 kernel: acpiphp: Slot [20] registered
Apr 13 20:18:16.953342 kernel: acpiphp: Slot [21] registered
Apr 13 20:18:16.953357 kernel: acpiphp: Slot [22] registered
Apr 13 20:18:16.953373 kernel: acpiphp: Slot [23] registered
Apr 13 20:18:16.953388 kernel: acpiphp: Slot [24] registered
Apr 13 20:18:16.953404 kernel: acpiphp: Slot [25] registered
Apr 13 20:18:16.953419 kernel: acpiphp: Slot [26] registered
Apr 13 20:18:16.953438 kernel: acpiphp: Slot [27] registered
Apr 13 20:18:16.953453 kernel: acpiphp: Slot [28] registered
Apr 13 20:18:16.953469 kernel: acpiphp: Slot [29] registered
Apr 13 20:18:16.953484 kernel: acpiphp: Slot [30] registered
Apr 13 20:18:16.953500 kernel: acpiphp: Slot [31] registered
Apr 13 20:18:16.953515 kernel: PCI host bridge to bus 0000:00
Apr 13 20:18:16.953683 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:18:16.953817 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:18:16.953942 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:18:16.954080 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 13 20:18:16.954245 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:18:16.954365 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:18:16.954519 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:18:16.954661 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 13 20:18:16.954810 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 13 20:18:16.954951 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:18:16.955086 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 13 20:18:16.955242 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 13 20:18:16.955381 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 13 20:18:16.955514 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 13 20:18:16.955659 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 13 20:18:16.955793 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 13 20:18:16.955939 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 13 20:18:16.956072 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 13 20:18:16.957132 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:18:16.957284 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 13 20:18:16.957418 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:18:16.957562 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 20:18:16.957701 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 13 20:18:16.957842 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 20:18:16.957972 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 13 20:18:16.957990 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:18:16.958005 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:18:16.958020 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:18:16.958035 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:18:16.958050 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:18:16.958068 kernel: iommu: Default domain type: Translated
Apr 13 20:18:16.958083 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:18:16.958114 kernel: efivars: Registered efivars operations
Apr 13 20:18:16.958129 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:18:16.958143 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:18:16.958158 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 13 20:18:16.958172 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 13 20:18:16.958305 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 13 20:18:16.958437 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 13 20:18:16.958570 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:18:16.958588 kernel: vgaarb: loaded
Apr 13 20:18:16.958603 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 13 20:18:16.958617 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 13 20:18:16.958631 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:18:16.958646 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:18:16.958660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:18:16.958674 kernel: pnp: PnP ACPI init
Apr 13 20:18:16.958689 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:18:16.961211 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:18:16.961231 kernel: NET: Registered PF_INET protocol family
Apr 13 20:18:16.961248 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:18:16.961264 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 13 20:18:16.961280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:18:16.961296 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:18:16.961313 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 13 20:18:16.961329 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 13 20:18:16.961351 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:18:16.961366 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:18:16.961382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:18:16.961398 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:18:16.961566 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:18:16.961699 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:18:16.961823 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:18:16.961944 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 13 20:18:16.962060 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:18:16.964354 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:18:16.964386 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:18:16.964403 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:18:16.964420 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 13 20:18:16.964436 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:18:16.964452 kernel: Initialise system trusted keyrings
Apr 13 20:18:16.964468 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 13 20:18:16.964484 kernel: Key type asymmetric registered
Apr 13 20:18:16.964505 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:18:16.964520 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:18:16.964536 kernel: io scheduler mq-deadline registered
Apr 13 20:18:16.964551 kernel: io scheduler kyber registered
Apr 13 20:18:16.964567 kernel: io scheduler bfq registered
Apr 13 20:18:16.964583 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:18:16.964599 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:18:16.964615 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:18:16.964631 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:18:16.964650 kernel: i8042: Warning: Keylock active
Apr 13 20:18:16.964665 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:18:16.964681 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:18:16.964825 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 13 20:18:16.964950 kernel: rtc_cmos 00:00: registered as rtc0
Apr 13 20:18:16.965073 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:18:16 UTC (1776111496)
Apr 13 20:18:16.965285 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 13 20:18:16.965308 kernel: intel_pstate: CPU model not supported
Apr 13 20:18:16.965332 kernel: efifb: probing for efifb
Apr 13 20:18:16.965348 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 13 20:18:16.965363 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 13 20:18:16.965380 kernel: efifb: scrolling: redraw
Apr 13 20:18:16.965396 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 13 20:18:16.965413 kernel: Console: switching to colour frame buffer device 100x37
Apr 13 20:18:16.965430 kernel: fb0: EFI VGA frame buffer device
Apr 13 20:18:16.965446 kernel: pstore: Using crash dump compression: deflate
Apr 13 20:18:16.965463 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 20:18:16.965483 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:18:16.965500 kernel: Segment Routing with IPv6
Apr 13 20:18:16.965516 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:18:16.965533 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:18:16.965550 kernel: Key type dns_resolver registered
Apr 13 20:18:16.965567 kernel: IPI shorthand broadcast: enabled
Apr 13 20:18:16.965610 kernel: sched_clock: Marking stable (511001978, 163941054)->(753973151, -79030119)
Apr 13 20:18:16.965630 kernel: registered taskstats version 1
Apr 13 20:18:16.965647 kernel: Loading compiled-in X.509 certificates
Apr 13 20:18:16.965668 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:18:16.965685 kernel: Key type .fscrypt registered
Apr 13 20:18:16.965702 kernel: Key type fscrypt-provisioning registered
Apr 13 20:18:16.965719 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:18:16.965736 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:18:16.965754 kernel: ima: No architecture policies found
Apr 13 20:18:16.965771 kernel: clk: Disabling unused clocks
Apr 13 20:18:16.965789 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:18:16.965807 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:18:16.965827 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:18:16.965845 kernel: Run /init as init process
Apr 13 20:18:16.965863 kernel: with arguments:
Apr 13 20:18:16.965880 kernel: /init
Apr 13 20:18:16.965897 kernel: with environment:
Apr 13 20:18:16.965914 kernel: HOME=/
Apr 13 20:18:16.965931 kernel: TERM=linux
Apr 13 20:18:16.965952 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:18:16.965973 systemd[1]: Detected virtualization amazon.
Apr 13 20:18:16.965995 systemd[1]: Detected architecture x86-64.
Apr 13 20:18:16.966012 systemd[1]: Running in initrd.
Apr 13 20:18:16.966030 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:18:16.966048 systemd[1]: Hostname set to .
Apr 13 20:18:16.966066 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:18:16.966084 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:18:16.967968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:18:16.967997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:18:16.968015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:18:16.968033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:18:16.968052 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:18:16.968074 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:18:16.968124 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:18:16.968145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:18:16.968162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:18:16.968181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:18:16.968199 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:18:16.968218 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:18:16.968236 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:18:16.968255 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:18:16.968277 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:18:16.968295 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:18:16.968314 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:18:16.968332 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:18:16.968350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:18:16.968369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:18:16.968387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:18:16.968405 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:18:16.968427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:18:16.968446 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:18:16.968464 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:18:16.968482 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:18:16.968500 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:18:16.968518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:18:16.968536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:16.968586 systemd-journald[179]: Collecting audit messages is disabled.
Apr 13 20:18:16.968631 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:18:16.968649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:18:16.968668 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:18:16.968691 systemd-journald[179]: Journal started
Apr 13 20:18:16.968728 systemd-journald[179]: Runtime Journal (/run/log/journal/ec211ee75c61c1d474e598911d623700) is 4.7M, max 38.2M, 33.4M free.
Apr 13 20:18:16.959196 systemd-modules-load[180]: Inserted module 'overlay'
Apr 13 20:18:16.979153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:18:16.984185 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:18:17.003326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:18:17.003399 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:18:17.006139 kernel: Bridge firewalling registered
Apr 13 20:18:17.007291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:17.007957 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 13 20:18:17.010718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:18:17.019316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:18:17.022383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:18:17.025664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:18:17.026602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:18:17.040197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:18:17.050396 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:18:17.057855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:18:17.061901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:18:17.066356 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:18:17.072309 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:18:17.076604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:18:17.089761 dracut-cmdline[213]: dracut-dracut-053
Apr 13 20:18:17.094350 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:18:17.121454 systemd-resolved[214]: Positive Trust Anchors:
Apr 13 20:18:17.121471 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:18:17.121536 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:18:17.129844 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 13 20:18:17.133171 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:18:17.133874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:18:17.185142 kernel: SCSI subsystem initialized
Apr 13 20:18:17.196135 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:18:17.208137 kernel: iscsi: registered transport (tcp)
Apr 13 20:18:17.229276 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:18:17.229361 kernel: QLogic iSCSI HBA Driver
Apr 13 20:18:17.269470 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:18:17.275294 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:18:17.302378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:18:17.302458 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:18:17.302481 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:18:17.347143 kernel: raid6: avx512x4 gen() 17988 MB/s
Apr 13 20:18:17.365128 kernel: raid6: avx512x2 gen() 18058 MB/s
Apr 13 20:18:17.383128 kernel: raid6: avx512x1 gen() 18016 MB/s
Apr 13 20:18:17.401127 kernel: raid6: avx2x4 gen() 18001 MB/s
Apr 13 20:18:17.419132 kernel: raid6: avx2x2 gen() 17895 MB/s
Apr 13 20:18:17.437979 kernel: raid6: avx2x1 gen() 13570 MB/s
Apr 13 20:18:17.438051 kernel: raid6: using algorithm avx512x2 gen() 18058 MB/s
Apr 13 20:18:17.456716 kernel: raid6: .... xor() 24678 MB/s, rmw enabled
Apr 13 20:18:17.456785 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 20:18:17.479142 kernel: xor: automatically using best checksumming function   avx
Apr 13 20:18:17.641133 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:18:17.651657 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:18:17.656391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:18:17.679171 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 13 20:18:17.684512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:18:17.692423 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:18:17.713733 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Apr 13 20:18:17.745358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:18:17.748376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:18:17.803201 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:18:17.813439 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:18:17.835419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:18:17.839432 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:18:17.841448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:18:17.841952 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:18:17.848320 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:18:17.873015 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:18:17.906423 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:18:17.916611 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 20:18:17.916895 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 20:18:17.931146 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 13 20:18:17.937125 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:4c:48:27:26:f3
Apr 13 20:18:17.945822 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:18:17.945883 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:18:17.946203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:18:17.947921 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:18:17.950506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:18:17.951625 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:18:17.952235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:18:17.952492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:17.953081 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:17.959546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:17.972365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:18:17.972500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:17.984396 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 20:18:17.984675 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 13 20:18:17.987424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:17.999342 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 20:18:18.005050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:18.020741 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:18:18.020777 kernel: GPT:9289727 != 33554431
Apr 13 20:18:18.020805 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:18:18.020822 kernel: GPT:9289727 != 33554431
Apr 13 20:18:18.020838 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:18:18.020855 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:18:18.025306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:18:18.044552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:18:18.084400 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (448)
Apr 13 20:18:18.095185 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Apr 13 20:18:18.174355 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 20:18:18.181384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 20:18:18.192046 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 20:18:18.192643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 20:18:18.200000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 20:18:18.206293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:18:18.212636 disk-uuid[632]: Primary Header is updated.
Apr 13 20:18:18.212636 disk-uuid[632]: Secondary Entries is updated.
Apr 13 20:18:18.212636 disk-uuid[632]: Secondary Header is updated.
Apr 13 20:18:18.218139 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:18:18.223122 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:18:18.228187 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:18:19.237303 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:18:19.237373 disk-uuid[633]: The operation has completed successfully.
Apr 13 20:18:19.383328 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:18:19.383454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:18:19.409379 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:18:19.414654 sh[976]: Success
Apr 13 20:18:19.430132 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 13 20:18:19.541946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:18:19.557296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:18:19.559190 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:18:19.596265 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:18:19.596341 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:18:19.599501 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:18:19.599565 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:18:19.600913 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:18:19.679136 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:18:19.694353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:18:19.695784 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:18:19.707331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:18:19.711085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:18:19.738233 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:18:19.738316 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:18:19.738342 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:18:19.761161 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:18:19.775070 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:18:19.778170 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:18:19.785400 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:18:19.792453 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:18:19.818274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:18:19.828316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:18:19.849133 systemd-networkd[1168]: lo: Link UP
Apr 13 20:18:19.849143 systemd-networkd[1168]: lo: Gained carrier
Apr 13 20:18:19.850951 systemd-networkd[1168]: Enumeration completed
Apr 13 20:18:19.851440 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:18:19.851795 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:18:19.851801 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:18:19.852497 systemd[1]: Reached target network.target - Network.
Apr 13 20:18:19.854778 systemd-networkd[1168]: eth0: Link UP
Apr 13 20:18:19.854783 systemd-networkd[1168]: eth0: Gained carrier
Apr 13 20:18:19.854797 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:18:19.873220 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.17.28/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 20:18:20.059602 ignition[1133]: Ignition 2.19.0
Apr 13 20:18:20.059616 ignition[1133]: Stage: fetch-offline
Apr 13 20:18:20.060008 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.060021 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.060655 ignition[1133]: Ignition finished successfully
Apr 13 20:18:20.062820 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:18:20.069336 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:18:20.083604 ignition[1177]: Ignition 2.19.0
Apr 13 20:18:20.083618 ignition[1177]: Stage: fetch
Apr 13 20:18:20.084177 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.084191 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.084317 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.095502 ignition[1177]: PUT result: OK
Apr 13 20:18:20.097255 ignition[1177]: parsed url from cmdline: ""
Apr 13 20:18:20.097262 ignition[1177]: no config URL provided
Apr 13 20:18:20.097272 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:18:20.097287 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:18:20.097310 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.097808 ignition[1177]: PUT result: OK
Apr 13 20:18:20.097857 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 20:18:20.098483 ignition[1177]: GET result: OK
Apr 13 20:18:20.098646 ignition[1177]: parsing config with SHA512: fe3f8d904520b178073df3095d4ffea11243074dff02dac6d0d6de8529f51e1466460e318557896341f9ff629226ea172f03b0d69d2220f5146fed2fe9af8731
Apr 13 20:18:20.103911 unknown[1177]: fetched base config from "system"
Apr 13 20:18:20.103926 unknown[1177]: fetched base config from "system"
Apr 13 20:18:20.105033 ignition[1177]: fetch: fetch complete
Apr 13 20:18:20.103935 unknown[1177]: fetched user config from "aws"
Apr 13 20:18:20.105047 ignition[1177]: fetch: fetch passed
Apr 13 20:18:20.105134 ignition[1177]: Ignition finished successfully
Apr 13 20:18:20.108559 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:18:20.116331 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:18:20.131903 ignition[1183]: Ignition 2.19.0
Apr 13 20:18:20.131916 ignition[1183]: Stage: kargs
Apr 13 20:18:20.132386 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.132400 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.132522 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.133321 ignition[1183]: PUT result: OK
Apr 13 20:18:20.135901 ignition[1183]: kargs: kargs passed
Apr 13 20:18:20.135985 ignition[1183]: Ignition finished successfully
Apr 13 20:18:20.137367 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:18:20.143303 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:18:20.157842 ignition[1189]: Ignition 2.19.0
Apr 13 20:18:20.157856 ignition[1189]: Stage: disks
Apr 13 20:18:20.158325 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.158340 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.158476 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.159842 ignition[1189]: PUT result: OK
Apr 13 20:18:20.162681 ignition[1189]: disks: disks passed
Apr 13 20:18:20.162758 ignition[1189]: Ignition finished successfully
Apr 13 20:18:20.164331 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:18:20.165322 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:18:20.165926 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:18:20.166288 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:18:20.166834 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:18:20.167379 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:18:20.171284 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:18:20.195366 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 20:18:20.198767 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:18:20.205255 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:18:20.309121 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:18:20.310180 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:18:20.311293 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:18:20.324276 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:18:20.328225 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:18:20.329729 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:18:20.331006 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:18:20.331048 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:18:20.345138 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Apr 13 20:18:20.348125 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:18:20.352266 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:18:20.352328 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:18:20.352277 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:18:20.363128 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:18:20.363364 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:18:20.368339 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:18:20.568369 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:18:20.574113 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:18:20.579225 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:18:20.583935 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:18:20.735309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:18:20.741222 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:18:20.744341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:18:20.753328 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:18:20.755826 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:18:20.792812 ignition[1333]: INFO : Ignition 2.19.0
Apr 13 20:18:20.792812 ignition[1333]: INFO : Stage: mount
Apr 13 20:18:20.795183 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.795183 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.795183 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.795183 ignition[1333]: INFO : PUT result: OK
Apr 13 20:18:20.799088 ignition[1333]: INFO : mount: mount passed
Apr 13 20:18:20.799631 ignition[1333]: INFO : Ignition finished successfully
Apr 13 20:18:20.802071 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:18:20.803039 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:18:20.810341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:18:20.828396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:18:20.845573 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345)
Apr 13 20:18:20.845642 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:18:20.847379 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:18:20.849947 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:18:20.855124 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:18:20.857053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:18:20.879816 ignition[1362]: INFO : Ignition 2.19.0
Apr 13 20:18:20.879816 ignition[1362]: INFO : Stage: files
Apr 13 20:18:20.881301 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:20.881301 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:20.881301 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:20.882859 ignition[1362]: INFO : PUT result: OK
Apr 13 20:18:20.884336 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:18:20.885380 ignition[1362]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 13 20:18:20.885380 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:18:20.910959 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:18:20.912257 ignition[1362]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 13 20:18:20.912257 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:18:20.912187 unknown[1362]: wrote ssh authorized keys file for user: core
Apr 13 20:18:20.914707 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 20:18:20.915612 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 20:18:20.915612 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:18:20.915612 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:18:20.995924 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 20:18:21.144375 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:18:21.144375 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:18:21.147159 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 20:18:21.584220 systemd-networkd[1168]: eth0: Gained IPv6LL
Apr 13 20:18:21.612019 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 20:18:21.962985 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:18:21.962985 ignition[1362]: INFO : files: op(c): [started]  processing unit "containerd.service"
Apr 13 20:18:21.965667 ignition[1362]: INFO : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(e): [started]  processing unit "prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:18:21.967325 ignition[1362]: INFO : files: files passed
Apr 13 20:18:21.967325 ignition[1362]: INFO : Ignition finished successfully
Apr 13 20:18:21.968720 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:18:21.976344 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:18:21.988450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:18:21.990852 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:18:21.990986 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:18:22.010608 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:18:22.010608 initrd-setup-root-after-ignition[1390]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:18:22.014781 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:18:22.016094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:18:22.017409 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:18:22.023320 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:18:22.049236 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:18:22.049374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:18:22.050563 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:18:22.051829 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:18:22.052649 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:18:22.058313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:18:22.072200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:18:22.077365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:18:22.090809 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:18:22.091558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:18:22.092644 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:18:22.093532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:18:22.093707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:18:22.094865 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:18:22.095782 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:18:22.096605 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:18:22.097380 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:18:22.098146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:18:22.098909 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:18:22.099744 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:18:22.100567 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:18:22.101724 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:18:22.102495 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:18:22.103220 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:18:22.103399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:18:22.104609 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:18:22.105406 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:18:22.106078 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:18:22.106251 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:18:22.106896 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:18:22.107066 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:18:22.108534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:18:22.108713 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:18:22.109447 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:18:22.109598 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:18:22.116367 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:18:22.117903 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:18:22.118183 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:18:22.122362 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:18:22.125176 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:18:22.126022 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:18:22.126787 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:18:22.126942 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:18:22.137933 ignition[1414]: INFO : Ignition 2.19.0
Apr 13 20:18:22.137933 ignition[1414]: INFO : Stage: umount
Apr 13 20:18:22.136754 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:18:22.139975 ignition[1414]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:18:22.139975 ignition[1414]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:18:22.139975 ignition[1414]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:18:22.139975 ignition[1414]: INFO : PUT result: OK
Apr 13 20:18:22.136894 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:18:22.144309 ignition[1414]: INFO : umount: umount passed
Apr 13 20:18:22.144309 ignition[1414]: INFO : Ignition finished successfully
Apr 13 20:18:22.153468 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:18:22.153639 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:18:22.154994 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:18:22.155447 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:18:22.156819 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:18:22.156884 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:18:22.157525 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:18:22.157588 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:18:22.158260 systemd[1]: Stopped target network.target - Network.
Apr 13 20:18:22.159009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:18:22.159074 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:18:22.160689 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:18:22.161737 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:18:22.163205 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:18:22.164799 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:18:22.165451 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:18:22.166237 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:18:22.166298 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:18:22.166781 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:18:22.166839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:18:22.167304 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:18:22.167363 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:18:22.167979 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:18:22.168033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:18:22.172432 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:18:22.173007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:18:22.177157 systemd-networkd[1168]: eth0: DHCPv6 lease lost
Apr 13 20:18:22.178151 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:18:22.178869 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:18:22.179002 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:18:22.180616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:18:22.180692 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:18:22.188259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:18:22.189368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:18:22.189461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:18:22.190754 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:18:22.191581 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:18:22.192349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:18:22.205693 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:18:22.206594 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:18:22.207592 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:18:22.207732 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:18:22.209501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:18:22.209562 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:18:22.210723 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:18:22.210907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:18:22.212078 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:18:22.212220 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:18:22.214094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:18:22.214203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:18:22.214996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:18:22.215040 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:18:22.215782 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:18:22.215841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:18:22.216939 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:18:22.216997 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:18:22.218060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:18:22.218133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:18:22.226317 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:18:22.228068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:18:22.228754 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:18:22.230655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:18:22.230733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:22.235241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:18:22.235386 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:18:22.318193 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:18:22.318337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:18:22.320403 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:18:22.320876 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:18:22.320947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:18:22.326304 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:18:22.348231 systemd[1]: Switching root.
Apr 13 20:18:22.379177 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:18:22.379257 systemd-journald[179]: Journal stopped
Apr 13 20:18:23.862316 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:18:23.862404 kernel: SELinux: policy capability open_perms=1
Apr 13 20:18:23.862424 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:18:23.862443 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:18:23.862462 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:18:23.862480 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:18:23.862499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:18:23.862525 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:18:23.862544 kernel: audit: type=1403 audit(1776111502.779:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:18:23.862570 systemd[1]: Successfully loaded SELinux policy in 51.604ms.
Apr 13 20:18:23.862608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.595ms.
Apr 13 20:18:23.862633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:18:23.862655 systemd[1]: Detected virtualization amazon.
Apr 13 20:18:23.862677 systemd[1]: Detected architecture x86-64.
Apr 13 20:18:23.862699 systemd[1]: Detected first boot.
Apr 13 20:18:23.862721 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:18:23.862743 zram_generator::config[1474]: No configuration found.
Apr 13 20:18:23.862778 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:18:23.862800 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:18:23.862822 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 13 20:18:23.862846 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:18:23.862869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:18:23.862890 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:18:23.862913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:18:23.862935 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:18:23.862957 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:18:23.862983 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:18:23.863005 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:18:23.863027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:18:23.863049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:18:23.863071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:18:23.863094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:18:23.863139 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:18:23.863158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:18:23.863180 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:18:23.863199 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:18:23.863218 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:18:23.863238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:18:23.863258 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:18:23.863277 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:18:23.863296 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:18:23.863316 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:18:23.863340 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:18:23.863360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:18:23.863380 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:18:23.863400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:18:23.863420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:18:23.863440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:18:23.863461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:18:23.863482 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:18:23.863509 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:18:23.863532 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:18:23.863558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:23.863580 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:18:23.863600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:18:23.863618 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:18:23.863635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:18:23.863665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:18:23.863684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:18:23.863703 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:18:23.863725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:18:23.863742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:18:23.863760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:18:23.863780 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:18:23.863798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:18:23.863818 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:18:23.863839 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 13 20:18:23.863861 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 13 20:18:23.863884 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:18:23.863906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:18:23.863929 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:18:23.863989 systemd-journald[1573]: Collecting audit messages is disabled.
Apr 13 20:18:23.864033 systemd-journald[1573]: Journal started
Apr 13 20:18:23.864073 systemd-journald[1573]: Runtime Journal (/run/log/journal/ec211ee75c61c1d474e598911d623700) is 4.7M, max 38.2M, 33.4M free.
Apr 13 20:18:23.880136 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:18:23.885136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:18:23.895081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:23.906324 kernel: fuse: init (API version 7.39)
Apr 13 20:18:23.906404 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:18:23.927405 kernel: loop: module loaded
Apr 13 20:18:23.926678 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:18:23.928909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:18:23.931299 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:18:23.934317 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:18:23.935157 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:18:23.937379 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:18:23.940158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:18:23.941998 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:18:23.942815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:18:23.946763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:18:23.947048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:18:23.948172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:18:23.948435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:18:23.949693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:18:23.951729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:18:23.954741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:18:23.970134 kernel: ACPI: bus type drm_connector registered
Apr 13 20:18:23.965911 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:18:23.974032 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:18:23.975166 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:18:23.978611 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:18:23.980725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:18:23.982034 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:18:23.983481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:18:23.998635 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:18:24.005277 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:18:24.014249 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:18:24.016331 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:18:24.023265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:18:24.044407 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:18:24.047234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:18:24.054274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:18:24.054896 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:18:24.058744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:18:24.065257 systemd-journald[1573]: Time spent on flushing to /var/log/journal/ec211ee75c61c1d474e598911d623700 is 90.785ms for 967 entries.
Apr 13 20:18:24.065257 systemd-journald[1573]: System Journal (/var/log/journal/ec211ee75c61c1d474e598911d623700) is 8.0M, max 195.6M, 187.6M free.
Apr 13 20:18:24.170265 systemd-journald[1573]: Received client request to flush runtime journal.
Apr 13 20:18:24.072341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:18:24.087820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:18:24.090491 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:18:24.092336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:18:24.110316 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:18:24.122588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:18:24.123392 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:18:24.154363 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 20:18:24.179935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:18:24.188532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:18:24.202790 systemd-tmpfiles[1627]: ACLs are not supported, ignoring.
Apr 13 20:18:24.202819 systemd-tmpfiles[1627]: ACLs are not supported, ignoring.
Apr 13 20:18:24.212605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:18:24.219383 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:18:24.262348 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:18:24.274407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:18:24.298795 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Apr 13 20:18:24.299244 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Apr 13 20:18:24.305947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:18:24.749223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:18:24.757321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:18:24.784386 systemd-udevd[1654]: Using default interface naming scheme 'v255'.
Apr 13 20:18:24.833706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:18:24.842386 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:18:24.872530 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:18:24.930710 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 13 20:18:24.931992 (udev-worker)[1655]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:18:24.971274 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:18:25.051131 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 13 20:18:25.086144 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 20:18:25.086208 systemd-networkd[1657]: lo: Link UP
Apr 13 20:18:25.086214 systemd-networkd[1657]: lo: Gained carrier
Apr 13 20:18:25.088061 systemd-networkd[1657]: Enumeration completed
Apr 13 20:18:25.088246 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:18:25.089344 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:18:25.089357 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:18:25.092321 systemd-networkd[1657]: eth0: Link UP
Apr 13 20:18:25.092858 systemd-networkd[1657]: eth0: Gained carrier
Apr 13 20:18:25.094336 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:18:25.097829 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:18:25.109133 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:18:25.112317 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:18:25.112396 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 13 20:18:25.116357 systemd-networkd[1657]: eth0: DHCPv4 address 172.31.17.28/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 20:18:25.118560 kernel: ACPI: button: Sleep Button [SLPF]
Apr 13 20:18:25.134237 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 20:18:25.155157 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:18:25.176150 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1656)
Apr 13 20:18:25.175590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:25.190401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:18:25.190801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:25.202477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:18:25.326442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:18:25.357559 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:18:25.369976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 20:18:25.377390 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:18:25.390837 lvm[1781]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:18:25.417626 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:18:25.418635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:18:25.429483 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:18:25.434847 lvm[1784]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:18:25.462549 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:18:25.464358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:18:25.465066 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:18:25.465128 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:18:25.465766 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:18:25.468007 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:18:25.473340 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:18:25.476445 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:18:25.478847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:18:25.485391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:18:25.491446 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:18:25.498348 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:18:25.501446 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:18:25.510584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:18:25.541133 kernel: loop0: detected capacity change from 0 to 140768
Apr 13 20:18:25.549921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:18:25.551265 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:18:25.614131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:18:25.638126 kernel: loop1: detected capacity change from 0 to 61336
Apr 13 20:18:25.749347 kernel: loop2: detected capacity change from 0 to 142488
Apr 13 20:18:25.825534 kernel: loop3: detected capacity change from 0 to 228704
Apr 13 20:18:25.875124 kernel: loop4: detected capacity change from 0 to 140768
Apr 13 20:18:25.895128 kernel: loop5: detected capacity change from 0 to 61336
Apr 13 20:18:25.908270 kernel: loop6: detected capacity change from 0 to 142488
Apr 13 20:18:25.930135 kernel: loop7: detected capacity change from 0 to 228704
Apr 13 20:18:25.957149 (sd-merge)[1805]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 20:18:25.957848 (sd-merge)[1805]: Merged extensions into '/usr'.
Apr 13 20:18:25.962782 systemd[1]: Reloading requested from client PID 1792 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:18:25.962800 systemd[1]: Reloading...
Apr 13 20:18:26.051183 zram_generator::config[1833]: No configuration found.
Apr 13 20:18:26.233771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:18:26.322417 systemd[1]: Reloading finished in 358 ms.
Apr 13 20:18:26.336671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:18:26.347423 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:18:26.352988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:18:26.373227 systemd[1]: Reloading requested from client PID 1890 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:18:26.373246 systemd[1]: Reloading...
Apr 13 20:18:26.395348 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:18:26.395934 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:18:26.397830 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:18:26.398864 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Apr 13 20:18:26.399055 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Apr 13 20:18:26.405565 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:18:26.405740 systemd-tmpfiles[1891]: Skipping /boot
Apr 13 20:18:26.426734 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:18:26.428278 systemd-tmpfiles[1891]: Skipping /boot
Apr 13 20:18:26.464133 zram_generator::config[1918]: No configuration found.
Apr 13 20:18:26.603123 ldconfig[1788]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:18:26.655175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:18:26.729533 systemd[1]: Reloading finished in 355 ms.
Apr 13 20:18:26.747606 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:18:26.752729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:18:26.770287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:18:26.774294 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:18:26.785394 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:18:26.797282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:18:26.805648 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:18:26.820218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:26.820549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:18:26.830450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:18:26.834621 systemd-networkd[1657]: eth0: Gained IPv6LL
Apr 13 20:18:26.840552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:18:26.844834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:18:26.846276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:18:26.846501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:26.848985 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:18:26.863999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:18:26.864283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:18:26.870112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:18:26.870343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:18:26.877638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:18:26.887821 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:18:26.888330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:18:26.894468 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:26.895025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:18:26.908809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:18:26.924305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:18:26.938857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:18:26.940680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:18:26.940787 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:18:26.943255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:18:26.944178 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:18:26.947917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:18:26.949739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:18:26.957657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:18:26.957906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:18:26.965675 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:18:26.965949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:18:26.979334 augenrules[2027]: No rules
Apr 13 20:18:26.981615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:18:26.982509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:18:26.989725 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:18:26.996855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:18:26.996972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:18:27.008138 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:18:27.009540 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:18:27.013496 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:18:27.026946 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:18:27.036342 systemd-resolved[1989]: Positive Trust Anchors:
Apr 13 20:18:27.036364 systemd-resolved[1989]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:18:27.036412 systemd-resolved[1989]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:18:27.042641 systemd-resolved[1989]: Defaulting to hostname 'linux'.
Apr 13 20:18:27.044818 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:18:27.045416 systemd[1]: Reached target network.target - Network.
Apr 13 20:18:27.045854 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:18:27.046289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:18:27.046655 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:18:27.047158 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:18:27.047564 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:18:27.048262 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:18:27.048767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:18:27.049182 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:18:27.049546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:18:27.049586 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:18:27.049943 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:18:27.050845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:18:27.052825 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:18:27.054874 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:18:27.057182 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:18:27.057765 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:18:27.058292 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:18:27.059018 systemd[1]: System is tainted: cgroupsv1
Apr 13 20:18:27.059078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:18:27.059121 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:18:27.063267 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:18:27.066415 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:18:27.078430 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:18:27.082316 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:18:27.106265 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:18:27.106950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:18:27.121323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:18:27.135758 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:18:27.147131 jq[2053]: false
Apr 13 20:18:27.149673 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found loop4
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found loop5
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found loop6
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found loop7
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p1
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p2
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p3
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found usr
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p4
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p6
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p7
Apr 13 20:18:27.164188 extend-filesystems[2054]: Found nvme0n1p9
Apr 13 20:18:27.164188 extend-filesystems[2054]: Checking size of /dev/nvme0n1p9
Apr 13 20:18:27.168319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:18:27.183298 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:18:27.198029 dbus-daemon[2051]: [system] SELinux support is enabled
Apr 13 20:18:27.198330 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 20:18:27.217617 dbus-daemon[2051]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1657 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:18:27.217881 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:18:27.230367 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:18:27.250600 coreos-metadata[2050]: Apr 13 20:18:27.247 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:18:27.250600 coreos-metadata[2050]: Apr 13 20:18:27.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 13 20:18:27.262463 coreos-metadata[2050]: Apr 13 20:18:27.256 INFO Fetch successful
Apr 13 20:18:27.262463 coreos-metadata[2050]: Apr 13 20:18:27.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 13 20:18:27.262463 coreos-metadata[2050]: Apr 13 20:18:27.261 INFO Fetch successful
Apr 13 20:18:27.262463 coreos-metadata[2050]: Apr 13 20:18:27.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 13 20:18:27.262463 coreos-metadata[2050]: Apr 13 20:18:27.262 INFO Fetch successful
Apr 13 20:18:27.262658 extend-filesystems[2054]: Resized partition /dev/nvme0n1p9
Apr 13 20:18:27.264456 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:18:27.271198 extend-filesystems[2084]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.265 INFO Fetch successful
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.265 INFO Fetch failed with 404: resource not found
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.266 INFO Fetch successful
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.266 INFO Fetch successful
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.268 INFO Fetch successful
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.269 INFO Fetch successful
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.269 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 13 20:18:27.277828 coreos-metadata[2050]: Apr 13 20:18:27.269 INFO Fetch successful
Apr 13 20:18:27.267219 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:18:27.275486 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:18:27.283122 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 13 20:18:27.290826 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:18:27.304632 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:18:27.316412 ntpd[2061]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:18:27.320743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: ----------------------------------------------------
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: corporation. Support and training for ntp-4 are
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: available at https://www.nwtime.org/support
Apr 13 20:18:27.321592 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: ----------------------------------------------------
Apr 13 20:18:27.316452 ntpd[2061]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:18:27.316463 ntpd[2061]: ----------------------------------------------------
Apr 13 20:18:27.337516 jq[2091]: true
Apr 13 20:18:27.337748 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: proto: precision = 0.075 usec (-24)
Apr 13 20:18:27.337748 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: basedate set to 2026-04-01
Apr 13 20:18:27.337748 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:18:27.324837 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:18:27.316475 ntpd[2061]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:18:27.316486 ntpd[2061]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:18:27.316496 ntpd[2061]: corporation. Support and training for ntp-4 are
Apr 13 20:18:27.354319 update_engine[2087]: I20260413 20:18:27.352890 2087 main.cc:92] Flatcar Update Engine starting
Apr 13 20:18:27.342692 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:18:27.316506 ntpd[2061]: available at https://www.nwtime.org/support
Apr 13 20:18:27.343043 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:18:27.316518 ntpd[2061]: ----------------------------------------------------
Apr 13 20:18:27.323615 ntpd[2061]: proto: precision = 0.075 usec (-24)
Apr 13 20:18:27.328346 ntpd[2061]: basedate set to 2026-04-01
Apr 13 20:18:27.328368 ntpd[2061]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:18:27.359160 update_engine[2087]: I20260413 20:18:27.357450 2087 update_check_scheduler.cc:74] Next update check in 10m56s
Apr 13 20:18:27.362978 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:18:27.363355 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:18:27.373432 ntpd[2061]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen normally on 3 eth0 172.31.17.28:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen normally on 4 lo [::1]:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listen normally on 5 eth0 [fe80::44c:48ff:fe27:26f3%2]:123
Apr 13 20:18:27.376305 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:18:27.373495 ntpd[2061]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:18:27.373698 ntpd[2061]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:18:27.373741 ntpd[2061]: Listen normally on 3 eth0 172.31.17.28:123
Apr 13 20:18:27.373785 ntpd[2061]: Listen normally on 4 lo [::1]:123
Apr 13 20:18:27.373829 ntpd[2061]: Listen normally on 5 eth0 [fe80::44c:48ff:fe27:26f3%2]:123
Apr 13 20:18:27.373870 ntpd[2061]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:18:27.402296 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 13 20:18:27.420325 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:18:27.420325 ntpd[2061]: 13 Apr 20:18:27 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:18:27.399832 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:18:27.415355 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:18:27.399870 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:18:27.432388 extend-filesystems[2084]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 13 20:18:27.432388 extend-filesystems[2084]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 20:18:27.432388 extend-filesystems[2084]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 13 20:18:27.456063 jq[2103]: true
Apr 13 20:18:27.466340 extend-filesystems[2054]: Resized filesystem in /dev/nvme0n1p9
Apr 13 20:18:27.472886 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:18:27.474355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:18:27.476746 (ntainerd)[2111]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:18:27.507027 systemd-logind[2083]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:18:27.507054 systemd-logind[2083]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 13 20:18:27.507077 systemd-logind[2083]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:18:27.511929 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:18:27.522182 systemd-logind[2083]: New seat seat0.
Apr 13 20:18:27.537499 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:18:27.541533 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 20:18:27.560310 dbus-daemon[2051]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 20:18:27.564584 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:18:27.566150 tar[2101]: linux-amd64/LICENSE
Apr 13 20:18:27.576008 tar[2101]: linux-amd64/helm
Apr 13 20:18:27.584317 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 13 20:18:27.588773 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:18:27.591028 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:18:27.591391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:18:27.611076 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:18:27.614051 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:18:27.614303 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:18:27.618566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:18:27.622467 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:18:27.668962 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1656)
Apr 13 20:18:27.720038 bash[2163]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:18:27.720903 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:18:27.744542 systemd[1]: Starting sshkeys.service...
Apr 13 20:18:27.838558 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:18:27.848127 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:18:27.979182 amazon-ssm-agent[2154]: Initializing new seelog logger
Apr 13 20:18:27.985891 amazon-ssm-agent[2154]: New Seelog Logger Creation Complete
Apr 13 20:18:27.985891 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.985891 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.986332 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 processing appconfig overrides
Apr 13 20:18:27.988787 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.988787 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.990026 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 processing appconfig overrides
Apr 13 20:18:27.997125 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.997125 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:27.997125 amazon-ssm-agent[2154]: 2026/04/13 20:18:27 processing appconfig overrides
Apr 13 20:18:28.001089 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO Proxy environment variables:
Apr 13 20:18:28.012121 amazon-ssm-agent[2154]: 2026/04/13 20:18:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:28.012121 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:18:28.012121 amazon-ssm-agent[2154]: 2026/04/13 20:18:28 processing appconfig overrides
Apr 13 20:18:28.080282 coreos-metadata[2194]: Apr 13 20:18:28.080 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:18:28.081965 coreos-metadata[2194]: Apr 13 20:18:28.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 13 20:18:28.085126 coreos-metadata[2194]: Apr 13 20:18:28.082 INFO Fetch successful
Apr 13 20:18:28.085126 coreos-metadata[2194]: Apr 13 20:18:28.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 13 20:18:28.085666 coreos-metadata[2194]: Apr 13 20:18:28.085 INFO Fetch successful
Apr 13 20:18:28.093930 unknown[2194]: wrote ssh authorized keys file for user: core
Apr 13 20:18:28.108328 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO no_proxy:
Apr 13 20:18:28.161182 locksmithd[2162]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:18:28.182673 update-ssh-keys[2230]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:18:28.184236 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:18:28.209121 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO https_proxy:
Apr 13 20:18:28.231340 systemd[1]: Finished sshkeys.service.
Apr 13 20:18:28.278678 dbus-daemon[2051]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:18:28.279632 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:18:28.286708 dbus-daemon[2051]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2158 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:18:28.300246 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:18:28.309484 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO http_proxy:
Apr 13 20:18:28.367900 polkitd[2268]: Started polkitd version 121
Apr 13 20:18:28.406120 polkitd[2268]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 20:18:28.406214 polkitd[2268]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 20:18:28.415978 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO Checking if agent identity type OnPrem can be assumed
Apr 13 20:18:28.422207 polkitd[2268]: Finished loading, compiling and executing 2 rules
Apr 13 20:18:28.437733 dbus-daemon[2051]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 20:18:28.437927 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 20:18:28.439588 polkitd[2268]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:18:28.517174 amazon-ssm-agent[2154]: 2026-04-13 20:18:27 INFO Checking if agent identity type EC2 can be assumed
Apr 13 20:18:28.517592 systemd-resolved[1989]: System hostname changed to 'ip-172-31-17-28'.
Apr 13 20:18:28.517688 systemd-hostnamed[2158]: Hostname set to (transient)
Apr 13 20:18:28.556004 containerd[2111]: time="2026-04-13T20:18:28.555892162Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:18:28.615853 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO Agent will take identity from EC2
Apr 13 20:18:28.655817 sshd_keygen[2096]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:18:28.690243 containerd[2111]: time="2026-04-13T20:18:28.690155167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.693358 containerd[2111]: time="2026-04-13T20:18:28.693304001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:18:28.693494 containerd[2111]: time="2026-04-13T20:18:28.693476864Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:18:28.693960 containerd[2111]: time="2026-04-13T20:18:28.693556493Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:18:28.693960 containerd[2111]: time="2026-04-13T20:18:28.693758255Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:18:28.693960 containerd[2111]: time="2026-04-13T20:18:28.693785691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.693960 containerd[2111]: time="2026-04-13T20:18:28.693865685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:18:28.693960 containerd[2111]: time="2026-04-13T20:18:28.693883897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694540851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694569360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694592471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694608879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694721765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695036 containerd[2111]: time="2026-04-13T20:18:28.694995827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695613 containerd[2111]: time="2026-04-13T20:18:28.695586465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:18:28.695762 containerd[2111]: time="2026-04-13T20:18:28.695742661Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:18:28.697554 containerd[2111]: time="2026-04-13T20:18:28.697263612Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:18:28.697554 containerd[2111]: time="2026-04-13T20:18:28.697344839Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:18:28.709155 containerd[2111]: time="2026-04-13T20:18:28.709083988Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:18:28.710209 containerd[2111]: time="2026-04-13T20:18:28.710183487Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:18:28.710749 containerd[2111]: time="2026-04-13T20:18:28.710394392Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:18:28.710749 containerd[2111]: time="2026-04-13T20:18:28.710432742Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:18:28.710749 containerd[2111]: time="2026-04-13T20:18:28.710457682Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:18:28.710749 containerd[2111]: time="2026-04-13T20:18:28.710641945Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713290024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713483178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713512510Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713532334Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713555721Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713575669Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.713654 containerd[2111]: time="2026-04-13T20:18:28.713595747Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714073361Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714123673Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714143829Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714160850Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714178207Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714209238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714230817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714250190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714272036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714292872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714318644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714337692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714364703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715230 containerd[2111]: time="2026-04-13T20:18:28.714384353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715831 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714423796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714443018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714463063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714486283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714508725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714540476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714558877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714575829Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714635789Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714661633Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714678396Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714696194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:18:28.715878 containerd[2111]: time="2026-04-13T20:18:28.714711733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.716373 containerd[2111]: time="2026-04-13T20:18:28.714740784Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:18:28.716373 containerd[2111]: time="2026-04-13T20:18:28.714755491Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:18:28.716373 containerd[2111]: time="2026-04-13T20:18:28.714770416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.719752180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.719867674Z" level=info msg="Connect containerd service"
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.719938587Z" level=info msg="using legacy CRI server"
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.719950142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.720095414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:18:28.722653 containerd[2111]: time="2026-04-13T20:18:28.720832752Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723325455Z" level=info msg="Start subscribing containerd event"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723401348Z" level=info msg="Start recovering state"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723486579Z" level=info msg="Start event monitor"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723506856Z" level=info msg="Start snapshots syncer"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723521226Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723532546Z" level=info msg="Start streaming server"
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723886605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.723945654Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:18:28.726407 containerd[2111]: time="2026-04-13T20:18:28.724018764Z" level=info msg="containerd successfully booted in 0.174220s"
Apr 13 20:18:28.724281 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:18:28.759733 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:18:28.770501 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:18:28.799041 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:18:28.799424 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:18:28.815385 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:18:28.815572 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:18:28.864936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:18:28.880843 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:18:28.892266 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:18:28.893273 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:18:28.915973 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:18:29.013786 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 13 20:18:29.114081 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 13 20:18:29.217986 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] Starting Core Agent
Apr 13 20:18:29.222438 tar[2101]: linux-amd64/README.md
Apr 13 20:18:29.237487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:18:29.239823 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 13 20:18:29.241970 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [Registrar] Starting registrar module
Apr 13 20:18:29.241970 amazon-ssm-agent[2154]: 2026-04-13 20:18:28 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 13 20:18:29.242085 amazon-ssm-agent[2154]: 2026-04-13 20:18:29 INFO [EC2Identity] EC2 registration was successful.
Apr 13 20:18:29.242085 amazon-ssm-agent[2154]: 2026-04-13 20:18:29 INFO [CredentialRefresher] credentialRefresher has started
Apr 13 20:18:29.242085 amazon-ssm-agent[2154]: 2026-04-13 20:18:29 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 13 20:18:29.242085 amazon-ssm-agent[2154]: 2026-04-13 20:18:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 13 20:18:29.318691 amazon-ssm-agent[2154]: 2026-04-13 20:18:29 INFO [CredentialRefresher] Next credential rotation will be in 31.808289312133333 minutes
Apr 13 20:18:29.768361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:18:29.769686 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:18:29.769705 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:18:29.771246 systemd[1]: Startup finished in 6.708s (kernel) + 7.042s (userspace) = 13.750s.
Apr 13 20:18:30.257938 amazon-ssm-agent[2154]: 2026-04-13 20:18:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 13 20:18:30.358796 amazon-ssm-agent[2154]: 2026-04-13 20:18:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2342) started
Apr 13 20:18:30.459861 amazon-ssm-agent[2154]: 2026-04-13 20:18:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 13 20:18:30.533752 kubelet[2332]: E0413 20:18:30.533644 2332 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:18:30.536362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:18:30.536641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:18:31.748122 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:18:31.753441 systemd[1]: Started sshd@0-172.31.17.28:22-50.85.169.122:51360.service - OpenSSH per-connection server daemon (50.85.169.122:51360).
Apr 13 20:18:32.712996 sshd[2356]: Accepted publickey for core from 50.85.169.122 port 51360 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:32.715652 sshd[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:32.725416 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:18:32.730711 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:18:32.734206 systemd-logind[2083]: New session 1 of user core.
Apr 13 20:18:32.750333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:18:32.762487 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:18:32.766061 (systemd)[2363]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:18:32.883130 systemd[2363]: Queued start job for default target default.target.
Apr 13 20:18:32.883631 systemd[2363]: Created slice app.slice - User Application Slice.
Apr 13 20:18:32.883662 systemd[2363]: Reached target paths.target - Paths.
Apr 13 20:18:32.883680 systemd[2363]: Reached target timers.target - Timers.
Apr 13 20:18:32.895290 systemd[2363]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:18:32.902733 systemd[2363]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:18:32.902816 systemd[2363]: Reached target sockets.target - Sockets.
Apr 13 20:18:32.902837 systemd[2363]: Reached target basic.target - Basic System.
Apr 13 20:18:32.902890 systemd[2363]: Reached target default.target - Main User Target.
Apr 13 20:18:32.902928 systemd[2363]: Startup finished in 129ms.
Apr 13 20:18:32.904471 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:18:32.911437 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:18:33.596581 systemd[1]: Started sshd@1-172.31.17.28:22-50.85.169.122:51366.service - OpenSSH per-connection server daemon (50.85.169.122:51366).
Apr 13 20:18:35.598167 systemd-resolved[1989]: Clock change detected. Flushing caches.
Apr 13 20:18:35.853589 sshd[2375]: Accepted publickey for core from 50.85.169.122 port 51366 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:35.855073 sshd[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:35.860280 systemd-logind[2083]: New session 2 of user core.
Apr 13 20:18:35.866588 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:18:36.533203 sshd[2375]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:36.538454 systemd[1]: sshd@1-172.31.17.28:22-50.85.169.122:51366.service: Deactivated successfully.
Apr 13 20:18:36.541877 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:18:36.543358 systemd-logind[2083]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:18:36.544512 systemd-logind[2083]: Removed session 2.
Apr 13 20:18:36.716583 systemd[1]: Started sshd@2-172.31.17.28:22-50.85.169.122:51378.service - OpenSSH per-connection server daemon (50.85.169.122:51378).
Apr 13 20:18:37.741038 sshd[2383]: Accepted publickey for core from 50.85.169.122 port 51378 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:37.742551 sshd[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:37.747719 systemd-logind[2083]: New session 3 of user core.
Apr 13 20:18:37.753503 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:18:38.448913 sshd[2383]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:38.453453 systemd[1]: sshd@2-172.31.17.28:22-50.85.169.122:51378.service: Deactivated successfully.
Apr 13 20:18:38.457569 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:18:38.458407 systemd-logind[2083]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:18:38.459396 systemd-logind[2083]: Removed session 3.
Apr 13 20:18:38.615524 systemd[1]: Started sshd@3-172.31.17.28:22-50.85.169.122:51382.service - OpenSSH per-connection server daemon (50.85.169.122:51382).
Apr 13 20:18:39.604124 sshd[2391]: Accepted publickey for core from 50.85.169.122 port 51382 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:39.604918 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:39.610219 systemd-logind[2083]: New session 4 of user core.
Apr 13 20:18:39.618655 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:18:40.291652 sshd[2391]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:40.297789 systemd[1]: sshd@3-172.31.17.28:22-50.85.169.122:51382.service: Deactivated successfully.
Apr 13 20:18:40.298225 systemd-logind[2083]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:18:40.302056 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:18:40.303190 systemd-logind[2083]: Removed session 4.
Apr 13 20:18:40.471528 systemd[1]: Started sshd@4-172.31.17.28:22-50.85.169.122:50630.service - OpenSSH per-connection server daemon (50.85.169.122:50630).
Apr 13 20:18:41.488926 sshd[2399]: Accepted publickey for core from 50.85.169.122 port 50630 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:41.489641 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:41.494827 systemd-logind[2083]: New session 5 of user core.
Apr 13 20:18:41.502552 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:18:42.042755 sudo[2403]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:18:42.043179 sudo[2403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:18:42.044242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:18:42.052745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:18:42.060995 sudo[2403]: pam_unix(sudo:session): session closed for user root
Apr 13 20:18:42.228884 sshd[2399]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:42.232731 systemd-logind[2083]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:18:42.233060 systemd[1]: sshd@4-172.31.17.28:22-50.85.169.122:50630.service: Deactivated successfully.
Apr 13 20:18:42.243101 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:18:42.245906 systemd-logind[2083]: Removed session 5.
Apr 13 20:18:42.302363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:18:42.306785 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:18:42.351571 kubelet[2420]: E0413 20:18:42.351533 2420 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:18:42.355773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:18:42.356075 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:18:42.399997 systemd[1]: Started sshd@5-172.31.17.28:22-50.85.169.122:50646.service - OpenSSH per-connection server daemon (50.85.169.122:50646).
Apr 13 20:18:43.382426 sshd[2428]: Accepted publickey for core from 50.85.169.122 port 50646 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:43.383962 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:43.389255 systemd-logind[2083]: New session 6 of user core.
Apr 13 20:18:43.396551 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:18:43.904464 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:18:43.904858 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:18:43.908937 sudo[2433]: pam_unix(sudo:session): session closed for user root
Apr 13 20:18:43.914293 sudo[2432]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:18:43.914681 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:18:43.934692 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:18:43.936512 auditctl[2436]: No rules
Apr 13 20:18:43.937198 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:18:43.937543 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:18:43.942859 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:18:43.977037 augenrules[2455]: No rules
Apr 13 20:18:43.978849 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:18:43.983452 sudo[2432]: pam_unix(sudo:session): session closed for user root
Apr 13 20:18:44.142802 sshd[2428]: pam_unix(sshd:session): session closed for user core
Apr 13 20:18:44.146528 systemd[1]: sshd@5-172.31.17.28:22-50.85.169.122:50646.service: Deactivated successfully.
Apr 13 20:18:44.150179 systemd-logind[2083]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:18:44.151224 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:18:44.153399 systemd-logind[2083]: Removed session 6.
Apr 13 20:18:44.304549 systemd[1]: Started sshd@6-172.31.17.28:22-50.85.169.122:50656.service - OpenSSH per-connection server daemon (50.85.169.122:50656).
Apr 13 20:18:45.251243 sshd[2464]: Accepted publickey for core from 50.85.169.122 port 50656 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:18:45.252738 sshd[2464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:18:45.257591 systemd-logind[2083]: New session 7 of user core.
Apr 13 20:18:45.264588 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:18:45.757804 sudo[2468]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:18:45.758302 sudo[2468]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:18:46.127510 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:18:46.130251 (dockerd)[2484]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:18:46.490870 dockerd[2484]: time="2026-04-13T20:18:46.490737472Z" level=info msg="Starting up"
Apr 13 20:18:46.740408 dockerd[2484]: time="2026-04-13T20:18:46.740352652Z" level=info msg="Loading containers: start."
Apr 13 20:18:46.868163 kernel: Initializing XFRM netlink socket
Apr 13 20:18:46.898519 (udev-worker)[2550]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:18:46.953372 systemd-networkd[1657]: docker0: Link UP
Apr 13 20:18:46.976907 dockerd[2484]: time="2026-04-13T20:18:46.976729418Z" level=info msg="Loading containers: done."
Apr 13 20:18:47.003686 dockerd[2484]: time="2026-04-13T20:18:47.003628381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:18:47.003928 dockerd[2484]: time="2026-04-13T20:18:47.003776991Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:18:47.003928 dockerd[2484]: time="2026-04-13T20:18:47.003917610Z" level=info msg="Daemon has completed initialization"
Apr 13 20:18:47.071814 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:18:47.072915 dockerd[2484]: time="2026-04-13T20:18:47.072294684Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:18:47.722495 containerd[2111]: time="2026-04-13T20:18:47.722453578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 20:18:48.356416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768020128.mount: Deactivated successfully.
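The dockerd and containerd entries above use logfmt-style payloads (`time=…`, `level=…`, `msg="…"`). A minimal sketch of recovering the fields: `shlex.split` honours the quoting, so a simple split is enough. `parse_logfmt` is our own illustrative helper, not a Docker API.

```python
import shlex

def parse_logfmt(payload: str) -> dict:
    """Split a logfmt payload into a {key: value} dict; quoted values keep their spaces."""
    fields = {}
    for token in shlex.split(payload):
        key, _, value = token.partition("=")
        fields[key] = value
    return fields

rec = parse_logfmt(
    'time="2026-04-13T20:18:47.072294684Z" level=info msg="API listen on /run/docker.sock"'
)
```

This deliberately ignores nested escaping inside `msg` (as in the containerd `PullImage` lines); for those, treat the value as an opaque string.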
Apr 13 20:18:50.253815 containerd[2111]: time="2026-04-13T20:18:50.253762291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:50.255922 containerd[2111]: time="2026-04-13T20:18:50.255840053Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29989419"
Apr 13 20:18:50.258502 containerd[2111]: time="2026-04-13T20:18:50.258424156Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:50.263096 containerd[2111]: time="2026-04-13T20:18:50.263046119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:50.264750 containerd[2111]: time="2026-04-13T20:18:50.264508375Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.542009456s"
Apr 13 20:18:50.264750 containerd[2111]: time="2026-04-13T20:18:50.264559670Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\""
Apr 13 20:18:50.265879 containerd[2111]: time="2026-04-13T20:18:50.265514788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 20:18:52.167684 containerd[2111]: time="2026-04-13T20:18:52.167637195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:52.169645 containerd[2111]: time="2026-04-13T20:18:52.169455369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021909"
Apr 13 20:18:52.172086 containerd[2111]: time="2026-04-13T20:18:52.171551082Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:52.176081 containerd[2111]: time="2026-04-13T20:18:52.176038837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:52.177591 containerd[2111]: time="2026-04-13T20:18:52.177549485Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.911996947s"
Apr 13 20:18:52.177686 containerd[2111]: time="2026-04-13T20:18:52.177596557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\""
Apr 13 20:18:52.178878 containerd[2111]: time="2026-04-13T20:18:52.178855862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 20:18:52.606309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:18:52.620437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:18:52.862396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
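The `Pulled image … size "<bytes>" in <duration>` messages above make it possible to compute the effective pull throughput per image. A sketch under one assumption (ours, not containerd's): the message always ends with that `size "…" in …s`/`…ms` pattern.

```python
import re

# Optional backslashes (\\?) tolerate journal-escaped quotes as they appear in the raw log.
PULLED_RE = re.compile(r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)')

def pull_throughput(msg: str) -> float:
    """Return effective pull throughput in MiB/s for a containerd 'Pulled image' message."""
    m = PULLED_RE.search(msg)
    size = int(m.group("size"))
    secs = float(m.group("dur")) * (0.001 if m.group("unit") == "ms" else 1.0)
    return size / secs / (1024 * 1024)

apiserver = pull_throughput('size "29986018" in 2.542009456s')   # kube-apiserver pull above
pause = pull_throughput('size "320368" in 606.01433ms')          # pause:3.10 pull below
```

The kube-apiserver pull works out to roughly 11 MiB/s; the tiny pause image reads much lower because per-request overhead dominates its 606 ms pull.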
Apr 13 20:18:52.869217 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:18:52.918064 kubelet[2695]: E0413 20:18:52.918012 2695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:18:52.920499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:18:52.920896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:18:54.182930 containerd[2111]: time="2026-04-13T20:18:54.182874012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:54.185392 containerd[2111]: time="2026-04-13T20:18:54.185111276Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162753"
Apr 13 20:18:54.187932 containerd[2111]: time="2026-04-13T20:18:54.187702243Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:54.192227 containerd[2111]: time="2026-04-13T20:18:54.192179429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:54.193766 containerd[2111]: time="2026-04-13T20:18:54.193580127Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 2.014599882s"
Apr 13 20:18:54.193766 containerd[2111]: time="2026-04-13T20:18:54.193626015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\""
Apr 13 20:18:54.194686 containerd[2111]: time="2026-04-13T20:18:54.194315306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 20:18:55.368432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908841830.mount: Deactivated successfully.
Apr 13 20:18:55.973846 containerd[2111]: time="2026-04-13T20:18:55.973787069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:55.975792 containerd[2111]: time="2026-04-13T20:18:55.975723892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828763"
Apr 13 20:18:55.978161 containerd[2111]: time="2026-04-13T20:18:55.978070920Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:55.981825 containerd[2111]: time="2026-04-13T20:18:55.981552528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:55.982519 containerd[2111]: time="2026-04-13T20:18:55.982361341Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.788005444s"
Apr 13 20:18:55.982519 containerd[2111]: time="2026-04-13T20:18:55.982405017Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\""
Apr 13 20:18:55.983065 containerd[2111]: time="2026-04-13T20:18:55.983035520Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 20:18:56.637099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209606933.mount: Deactivated successfully.
Apr 13 20:18:57.906522 containerd[2111]: time="2026-04-13T20:18:57.906466008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:57.907929 containerd[2111]: time="2026-04-13T20:18:57.907877382Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 13 20:18:57.909167 containerd[2111]: time="2026-04-13T20:18:57.908713455Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:57.911791 containerd[2111]: time="2026-04-13T20:18:57.911711990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:57.913279 containerd[2111]: time="2026-04-13T20:18:57.913098889Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.930028337s"
Apr 13 20:18:57.913279 containerd[2111]: time="2026-04-13T20:18:57.913160120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 13 20:18:57.914074 containerd[2111]: time="2026-04-13T20:18:57.914030079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 20:18:58.498757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660327387.mount: Deactivated successfully.
Apr 13 20:18:58.511377 containerd[2111]: time="2026-04-13T20:18:58.511321749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:58.513460 containerd[2111]: time="2026-04-13T20:18:58.513231567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 13 20:18:58.515612 containerd[2111]: time="2026-04-13T20:18:58.515545113Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:58.519317 containerd[2111]: time="2026-04-13T20:18:58.519248792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:18:58.521028 containerd[2111]: time="2026-04-13T20:18:58.520086395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.01433ms"
Apr 13 20:18:58.521028 containerd[2111]: time="2026-04-13T20:18:58.520130809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 13 20:18:58.521028 containerd[2111]: time="2026-04-13T20:18:58.520951659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 20:18:59.089674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160566296.mount: Deactivated successfully.
Apr 13 20:18:59.803322 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:19:00.501774 containerd[2111]: time="2026-04-13T20:19:00.501716123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:00.503901 containerd[2111]: time="2026-04-13T20:19:00.503636065Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Apr 13 20:19:00.506296 containerd[2111]: time="2026-04-13T20:19:00.505991993Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:00.510371 containerd[2111]: time="2026-04-13T20:19:00.510309830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:00.511677 containerd[2111]: time="2026-04-13T20:19:00.511478647Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.990490214s"
Apr 13 20:19:00.511677 containerd[2111]: time="2026-04-13T20:19:00.511525699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 13 20:19:03.146599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 20:19:03.156097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:19:03.454346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:19:03.455819 (kubelet)[2872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:19:03.523965 kubelet[2872]: E0413 20:19:03.523917 2872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:19:03.526696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:19:03.526940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:19:05.349916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:19:05.362515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:19:05.400124 systemd[1]: Reloading requested from client PID 2889 ('systemctl') (unit session-7.scope)...
Apr 13 20:19:05.400163 systemd[1]: Reloading...
Apr 13 20:19:05.512449 zram_generator::config[2929]: No configuration found.
Apr 13 20:19:05.682270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:19:05.766463 systemd[1]: Reloading finished in 365 ms.
Apr 13 20:19:05.830400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:19:05.836675 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 20:19:05.837163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:19:05.847634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:19:06.049407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:19:06.052475 (kubelet)[3007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:19:06.110128 kubelet[3007]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:19:06.110574 kubelet[3007]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:19:06.110574 kubelet[3007]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
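The deprecation warnings above point at the kubelet config file mechanism. A hypothetical minimal `KubeletConfiguration` carrying the same settings might look like the fragment below; the field names are real KubeletConfiguration fields, but the values are illustrative (the socket path is an assumption, the plugin directory is the one the kubelet logs further down):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces the deprecated --volume-plugin-dir flag
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

(`--pod-infra-container-image` has no config-file replacement; per the warning, the sandbox image comes from the CRI runtime instead.)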
Apr 13 20:19:06.110574 kubelet[3007]: I0413 20:19:06.110282 3007 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:19:06.581030 kubelet[3007]: I0413 20:19:06.580986 3007 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 20:19:06.581030 kubelet[3007]: I0413 20:19:06.581017 3007 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:19:06.581351 kubelet[3007]: I0413 20:19:06.581328 3007 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:19:06.635732 kubelet[3007]: E0413 20:19:06.635586 3007 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:19:06.639161 kubelet[3007]: I0413 20:19:06.637001 3007 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:19:06.648047 kubelet[3007]: E0413 20:19:06.648000 3007 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:19:06.648047 kubelet[3007]: I0413 20:19:06.648046 3007 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:19:06.659026 kubelet[3007]: I0413 20:19:06.658990 3007 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 20:19:06.664247 kubelet[3007]: I0413 20:19:06.664183 3007 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:19:06.667756 kubelet[3007]: I0413 20:19:06.664243 3007 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 13 20:19:06.668635 kubelet[3007]: I0413 20:19:06.668605 3007 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:19:06.668635 kubelet[3007]: I0413 20:19:06.668637 3007 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 20:19:06.670358 kubelet[3007]: I0413 20:19:06.670329 3007 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:19:06.678958 kubelet[3007]: I0413 20:19:06.678921 3007 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 20:19:06.678958 kubelet[3007]: I0413 20:19:06.678956 3007 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:19:06.679273 kubelet[3007]: I0413 20:19:06.678991 3007 kubelet.go:386] "Adding apiserver pod source"
Apr 13 20:19:06.681579 kubelet[3007]: I0413 20:19:06.681250 3007 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:19:06.688975 kubelet[3007]: E0413 20:19:06.688682 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:19:06.691164 kubelet[3007]: I0413 20:19:06.691095 3007 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:19:06.692072 kubelet[3007]: I0413 20:19:06.691979 3007 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:19:06.693891 kubelet[3007]: W0413 20:19:06.693217 3007 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:19:06.701095 kubelet[3007]: E0413 20:19:06.700817 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 20:19:06.704286 kubelet[3007]: I0413 20:19:06.704252 3007 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 20:19:06.704414 kubelet[3007]: I0413 20:19:06.704339 3007 server.go:1289] "Started kubelet"
Apr 13 20:19:06.705242 kubelet[3007]: I0413 20:19:06.704501 3007 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:19:06.706201 kubelet[3007]: I0413 20:19:06.705747 3007 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:19:06.708751 kubelet[3007]: I0413 20:19:06.707974 3007 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:19:06.708751 kubelet[3007]: I0413 20:19:06.708445 3007 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:19:06.710380 kubelet[3007]: E0413 20:19:06.708603 3007 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.28:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-28.18a6040a9ec773ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-28,UID:ip-172-31-17-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-28,},FirstTimestamp:2026-04-13 20:19:06.704294862 +0000 UTC m=+0.645725548,LastTimestamp:2026-04-13 20:19:06.704294862 +0000 UTC m=+0.645725548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-28,}"
Apr 13 20:19:06.711402 kubelet[3007]: I0413 20:19:06.711375 3007 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:19:06.713461 kubelet[3007]: I0413 20:19:06.712570 3007 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:19:06.716169 kubelet[3007]: E0413 20:19:06.715410 3007 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-28\" not found"
Apr 13 20:19:06.716169 kubelet[3007]: I0413 20:19:06.715453 3007 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 20:19:06.716169 kubelet[3007]: I0413 20:19:06.715689 3007 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 20:19:06.716169 kubelet[3007]: I0413 20:19:06.715741 3007 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 20:19:06.716392 kubelet[3007]: E0413 20:19:06.716260 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 20:19:06.716596 kubelet[3007]: E0413 20:19:06.716563 3007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="200ms"
Apr 13 20:19:06.722833 kubelet[3007]: I0413 20:19:06.722805 3007 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:19:06.723128 kubelet[3007]: I0413 20:19:06.723091 3007 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:19:06.725676 kubelet[3007]: I0413 20:19:06.725652 3007 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:19:06.755588 kubelet[3007]: E0413 20:19:06.754730 3007 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:19:06.758168 kubelet[3007]: I0413 20:19:06.757289 3007 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:19:06.758878 kubelet[3007]: I0413 20:19:06.758846 3007 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:19:06.758878 kubelet[3007]: I0413 20:19:06.758877 3007 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 20:19:06.759011 kubelet[3007]: I0413 20:19:06.758904 3007 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
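Every reflector, certificate, event, and lease failure above carries the same `dial tcp …: connect: connection refused` error. A small sketch (our own helper, not a kubelet facility) that collects the refused `host:port` targets from such lines, confirming they all point at one not-yet-started apiserver:

```python
import re

DIAL_RE = re.compile(r"dial tcp (\d{1,3}(?:\.\d{1,3}){3}:\d+): connect: connection refused")

def refused_endpoints(lines: list[str]) -> set[str]:
    """Collect every host:port targeted by a 'connection refused' dial error."""
    return {m.group(1) for line in lines for m in DIAL_RE.finditer(line)}

targets = refused_endpoints([
    'err="failed to list *v1.Service: Get ...: dial tcp 172.31.17.28:6443: connect: connection refused"',
    'err="Post ...: dial tcp 172.31.17.28:6443: connect: connection refused" interval="200ms"',
])
```

A single endpoint in the result, as here (`172.31.17.28:6443`), indicates a bootstrap ordering issue (the static apiserver pod has not come up yet) rather than a network problem.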
Apr 13 20:19:06.759011 kubelet[3007]: I0413 20:19:06.758914 3007 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 20:19:06.759011 kubelet[3007]: E0413 20:19:06.758960 3007 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:19:06.766409 kubelet[3007]: E0413 20:19:06.766374 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 20:19:06.766781 kubelet[3007]: I0413 20:19:06.766762 3007 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:19:06.766781 kubelet[3007]: I0413 20:19:06.766780 3007 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:19:06.766895 kubelet[3007]: I0413 20:19:06.766797 3007 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:19:06.771989 kubelet[3007]: I0413 20:19:06.771954 3007 policy_none.go:49] "None policy: Start"
Apr 13 20:19:06.771989 kubelet[3007]: I0413 20:19:06.771983 3007 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 20:19:06.772187 kubelet[3007]: I0413 20:19:06.772001 3007 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 20:19:06.779364 kubelet[3007]: E0413 20:19:06.779324 3007 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:19:06.779566 kubelet[3007]: I0413 20:19:06.779546 3007 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:19:06.779623 kubelet[3007]: I0413 20:19:06.779564 3007 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:19:06.780948 kubelet[3007]: I0413 20:19:06.780917 3007 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:19:06.784186 kubelet[3007]: E0413 20:19:06.784130 3007 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:19:06.784309 kubelet[3007]: E0413 20:19:06.784216 3007 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-28\" not found"
Apr 13 20:19:06.869166 kubelet[3007]: E0413 20:19:06.869041 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28"
Apr 13 20:19:06.877558 kubelet[3007]: E0413 20:19:06.877499 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28"
Apr 13 20:19:06.880494 kubelet[3007]: I0413 20:19:06.880473 3007 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28"
Apr 13 20:19:06.881243 kubelet[3007]: E0413 20:19:06.881129 3007 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28"
Apr 13 20:19:06.882632 kubelet[3007]: E0413 20:19:06.882607 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28"
Apr 13 20:19:06.918164 kubelet[3007]: I0413 20:19:06.917364 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:06.918164 kubelet[3007]: I0413 20:19:06.917416 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cfe737b4c94470ae4b8010d35ce2ffb-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-28\" (UID: \"3cfe737b4c94470ae4b8010d35ce2ffb\") " pod="kube-system/kube-scheduler-ip-172-31-17-28"
Apr 13 20:19:06.918164 kubelet[3007]: I0413 20:19:06.917471 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-ca-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:06.918164 kubelet[3007]: I0413 20:19:06.917494 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:06.918164 kubelet[3007]: I0413 20:19:06.917520 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:06.918400 kubelet[3007]: I0413 20:19:06.917545 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:06.918400 kubelet[3007]: I0413 20:19:06.917571 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:06.918400 kubelet[3007]: I0413 20:19:06.917592 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:06.918400 kubelet[3007]: E0413 20:19:06.917657 3007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="400ms"
Apr 13 20:19:06.918400 kubelet[3007]: I0413 20:19:06.918100 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:07.083195 kubelet[3007]: I0413 20:19:07.083163 3007 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28"
Apr 13 20:19:07.083531 kubelet[3007]: E0413 20:19:07.083501 3007 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\":
dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Apr 13 20:19:07.170766 containerd[2111]: time="2026-04-13T20:19:07.170725018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-28,Uid:71af294dce043cdafc8c5ad318791fc9,Namespace:kube-system,Attempt:0,}" Apr 13 20:19:07.179049 containerd[2111]: time="2026-04-13T20:19:07.179005155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-28,Uid:5eee871cc9a80acc7e8d0b92c917d727,Namespace:kube-system,Attempt:0,}" Apr 13 20:19:07.184586 containerd[2111]: time="2026-04-13T20:19:07.184213296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-28,Uid:3cfe737b4c94470ae4b8010d35ce2ffb,Namespace:kube-system,Attempt:0,}" Apr 13 20:19:07.318310 kubelet[3007]: E0413 20:19:07.318266 3007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="800ms" Apr 13 20:19:07.485730 kubelet[3007]: I0413 20:19:07.485622 3007 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Apr 13 20:19:07.486039 kubelet[3007]: E0413 20:19:07.485978 3007 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Apr 13 20:19:07.737172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561664437.mount: Deactivated successfully. 
Apr 13 20:19:07.750785 containerd[2111]: time="2026-04-13T20:19:07.750725868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:19:07.754824 containerd[2111]: time="2026-04-13T20:19:07.754756977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:19:07.756972 containerd[2111]: time="2026-04-13T20:19:07.756788448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:19:07.758793 kubelet[3007]: E0413 20:19:07.758737 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:19:07.759128 containerd[2111]: time="2026-04-13T20:19:07.759020604Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:19:07.761571 containerd[2111]: time="2026-04-13T20:19:07.761532140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:19:07.763970 containerd[2111]: time="2026-04-13T20:19:07.763825777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:19:07.766255 containerd[2111]: time="2026-04-13T20:19:07.766196505Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 13 20:19:07.768224 containerd[2111]: time="2026-04-13T20:19:07.768180602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:19:07.769612 containerd[2111]: time="2026-04-13T20:19:07.769566296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 590.479964ms" Apr 13 20:19:07.773450 containerd[2111]: time="2026-04-13T20:19:07.773408129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.117081ms" Apr 13 20:19:07.787372 containerd[2111]: time="2026-04-13T20:19:07.787317281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.504506ms" Apr 13 20:19:07.801706 kubelet[3007]: E0413 20:19:07.801659 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:19:07.902705 kubelet[3007]: E0413 20:19:07.902443 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:19:08.004761 containerd[2111]: time="2026-04-13T20:19:08.004340904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:08.004761 containerd[2111]: time="2026-04-13T20:19:08.004415559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:08.004761 containerd[2111]: time="2026-04-13T20:19:08.004440788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.004761 containerd[2111]: time="2026-04-13T20:19:08.004571324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.010147 containerd[2111]: time="2026-04-13T20:19:08.009668689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:08.010147 containerd[2111]: time="2026-04-13T20:19:08.009817135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:08.010147 containerd[2111]: time="2026-04-13T20:19:08.009904744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.011804 containerd[2111]: time="2026-04-13T20:19:08.011539801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.026880 containerd[2111]: time="2026-04-13T20:19:08.024168691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:08.026880 containerd[2111]: time="2026-04-13T20:19:08.026359790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:08.026880 containerd[2111]: time="2026-04-13T20:19:08.026386412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.026880 containerd[2111]: time="2026-04-13T20:19:08.026517706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:08.102063 kubelet[3007]: E0413 20:19:08.102006 3007 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:19:08.120397 kubelet[3007]: E0413 20:19:08.119638 3007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="1.6s" Apr 13 20:19:08.135826 containerd[2111]: time="2026-04-13T20:19:08.135784135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-28,Uid:71af294dce043cdafc8c5ad318791fc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dcbb361e5fe456626fa0568dab376e225f5e4b81368aa31ac8039bdfd3e958f\"" Apr 13 20:19:08.150984 containerd[2111]: time="2026-04-13T20:19:08.150940382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-28,Uid:3cfe737b4c94470ae4b8010d35ce2ffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ecf787dbb59bf4612b605d52024e581795975d6ffdcd256f709702058c2ef3\"" Apr 13 20:19:08.151375 containerd[2111]: time="2026-04-13T20:19:08.151344068Z" level=info msg="CreateContainer within sandbox \"1dcbb361e5fe456626fa0568dab376e225f5e4b81368aa31ac8039bdfd3e958f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:19:08.163114 containerd[2111]: time="2026-04-13T20:19:08.162942651Z" level=info msg="CreateContainer within sandbox \"22ecf787dbb59bf4612b605d52024e581795975d6ffdcd256f709702058c2ef3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:19:08.167169 containerd[2111]: 
time="2026-04-13T20:19:08.166260079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-28,Uid:5eee871cc9a80acc7e8d0b92c917d727,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a63942a8f80ebf183dddd21ee0907fb0138302500577d616cd0d039256afbe9\"" Apr 13 20:19:08.174970 containerd[2111]: time="2026-04-13T20:19:08.174935321Z" level=info msg="CreateContainer within sandbox \"3a63942a8f80ebf183dddd21ee0907fb0138302500577d616cd0d039256afbe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:19:08.200258 containerd[2111]: time="2026-04-13T20:19:08.200208802Z" level=info msg="CreateContainer within sandbox \"1dcbb361e5fe456626fa0568dab376e225f5e4b81368aa31ac8039bdfd3e958f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96b2ca3a3914fdedc4637ba5e8d59f86dc13c06fe1a157ca59f8bb3b51fedad3\"" Apr 13 20:19:08.201499 containerd[2111]: time="2026-04-13T20:19:08.201462170Z" level=info msg="StartContainer for \"96b2ca3a3914fdedc4637ba5e8d59f86dc13c06fe1a157ca59f8bb3b51fedad3\"" Apr 13 20:19:08.216999 containerd[2111]: time="2026-04-13T20:19:08.216831894Z" level=info msg="CreateContainer within sandbox \"3a63942a8f80ebf183dddd21ee0907fb0138302500577d616cd0d039256afbe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5\"" Apr 13 20:19:08.217663 containerd[2111]: time="2026-04-13T20:19:08.217473264Z" level=info msg="StartContainer for \"095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5\"" Apr 13 20:19:08.219845 containerd[2111]: time="2026-04-13T20:19:08.219807736Z" level=info msg="CreateContainer within sandbox \"22ecf787dbb59bf4612b605d52024e581795975d6ffdcd256f709702058c2ef3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139\"" Apr 13 20:19:08.220423 
containerd[2111]: time="2026-04-13T20:19:08.220393611Z" level=info msg="StartContainer for \"12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139\"" Apr 13 20:19:08.291310 kubelet[3007]: I0413 20:19:08.289549 3007 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Apr 13 20:19:08.291310 kubelet[3007]: E0413 20:19:08.289918 3007 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Apr 13 20:19:08.341985 containerd[2111]: time="2026-04-13T20:19:08.341935606Z" level=info msg="StartContainer for \"96b2ca3a3914fdedc4637ba5e8d59f86dc13c06fe1a157ca59f8bb3b51fedad3\" returns successfully" Apr 13 20:19:08.375733 containerd[2111]: time="2026-04-13T20:19:08.375671361Z" level=info msg="StartContainer for \"095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5\" returns successfully" Apr 13 20:19:08.375935 containerd[2111]: time="2026-04-13T20:19:08.375671681Z" level=info msg="StartContainer for \"12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139\" returns successfully" Apr 13 20:19:08.785069 kubelet[3007]: E0413 20:19:08.784168 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:08.791176 kubelet[3007]: E0413 20:19:08.789718 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:08.792786 kubelet[3007]: E0413 20:19:08.792759 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:08.836336 kubelet[3007]: E0413 20:19:08.836295 3007 certificate_manager.go:596] "Failed while requesting a signed 
certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:19:09.793982 kubelet[3007]: E0413 20:19:09.793939 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:09.794609 kubelet[3007]: E0413 20:19:09.794586 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:09.795246 kubelet[3007]: E0413 20:19:09.795222 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:09.892225 kubelet[3007]: I0413 20:19:09.892199 3007 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Apr 13 20:19:10.799049 kubelet[3007]: E0413 20:19:10.799009 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:10.799539 kubelet[3007]: E0413 20:19:10.799309 3007 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:11.536023 kubelet[3007]: E0413 20:19:11.535979 3007 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Apr 13 20:19:11.648177 kubelet[3007]: I0413 20:19:11.647497 3007 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-28" Apr 13 20:19:11.649158 kubelet[3007]: E0413 
20:19:11.648381 3007 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-28\": node \"ip-172-31-17-28\" not found" Apr 13 20:19:11.704648 kubelet[3007]: I0413 20:19:11.704604 3007 apiserver.go:52] "Watching apiserver" Apr 13 20:19:11.717218 kubelet[3007]: I0413 20:19:11.716187 3007 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:19:11.717647 kubelet[3007]: I0413 20:19:11.717192 3007 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-28" Apr 13 20:19:11.729634 kubelet[3007]: E0413 20:19:11.729382 3007 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-28" Apr 13 20:19:11.729634 kubelet[3007]: I0413 20:19:11.729420 3007 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-28" Apr 13 20:19:11.736544 kubelet[3007]: E0413 20:19:11.736280 3007 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-28" Apr 13 20:19:11.736544 kubelet[3007]: I0413 20:19:11.736322 3007 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28" Apr 13 20:19:11.741538 kubelet[3007]: E0413 20:19:11.741500 3007 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-28" Apr 13 20:19:13.531558 systemd[1]: Reloading requested from client PID 3289 ('systemctl') (unit session-7.scope)... Apr 13 20:19:13.531578 systemd[1]: Reloading... 
Apr 13 20:19:13.646179 zram_generator::config[3329]: No configuration found. Apr 13 20:19:13.786076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:19:13.881586 systemd[1]: Reloading finished in 349 ms. Apr 13 20:19:13.923517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:19:13.937835 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:19:13.938310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:19:13.951647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:19:14.201376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:19:14.216977 (kubelet)[3399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:19:14.287132 kubelet[3399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:19:14.287132 kubelet[3399]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:19:14.287132 kubelet[3399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:19:14.287679 kubelet[3399]: I0413 20:19:14.287326 3399 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:19:14.298279 kubelet[3399]: I0413 20:19:14.298241 3399 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:19:14.298279 kubelet[3399]: I0413 20:19:14.298269 3399 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:19:14.306265 kubelet[3399]: I0413 20:19:14.306227 3399 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:19:14.308304 kubelet[3399]: I0413 20:19:14.308007 3399 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:19:14.311194 kubelet[3399]: I0413 20:19:14.311164 3399 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:19:14.317978 update_engine[2087]: I20260413 20:19:14.317041 2087 update_attempter.cc:509] Updating boot flags... Apr 13 20:19:14.325167 kubelet[3399]: E0413 20:19:14.323556 3399 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:19:14.325167 kubelet[3399]: I0413 20:19:14.323596 3399 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:19:14.327157 kubelet[3399]: I0413 20:19:14.327114 3399 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 20:19:14.328169 kubelet[3399]: I0413 20:19:14.327944 3399 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:19:14.328768 kubelet[3399]: I0413 20:19:14.328005 3399 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329108 3399 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:19:14.330158 kubelet[3399]: I0413 20:19:14.329131 3399 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329248 3399 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329450 3399 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329467 3399 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329513 3399 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:19:14.330158 kubelet[3399]: I0413 20:19:14.329533 3399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:19:14.333086 kubelet[3399]: I0413 20:19:14.333067 3399 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:19:14.333928 kubelet[3399]: I0413 20:19:14.333909 3399 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:19:14.340167 kubelet[3399]: I0413 20:19:14.339843 3399 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:19:14.340938 kubelet[3399]: I0413 20:19:14.340898 3399 server.go:1289] "Started kubelet" Apr 13 20:19:14.352727 kubelet[3399]: I0413 20:19:14.352696 3399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:19:14.361094 kubelet[3399]: I0413 20:19:14.361045 3399 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:19:14.366879 kubelet[3399]: I0413 20:19:14.366847 3399 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:19:14.375171 kubelet[3399]: I0413 20:19:14.373954 3399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:19:14.375543 kubelet[3399]: I0413 20:19:14.375523 3399 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:19:14.376795 kubelet[3399]: I0413 20:19:14.376756 3399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:19:14.379351 kubelet[3399]: I0413 20:19:14.379327 3399 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 20:19:14.379843 kubelet[3399]: E0413 20:19:14.379820 3399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-28\" not found"
Apr 13 20:19:14.383529 kubelet[3399]: I0413 20:19:14.383510 3399 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 20:19:14.383884 kubelet[3399]: I0413 20:19:14.383867 3399 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 20:19:14.400758 kubelet[3399]: I0413 20:19:14.400671 3399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:19:14.416274 kubelet[3399]: I0413 20:19:14.416247 3399 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:19:14.416274 kubelet[3399]: I0413 20:19:14.416271 3399 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:19:14.416471 kubelet[3399]: I0413 20:19:14.416390 3399 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:19:14.464250 kubelet[3399]: I0413 20:19:14.463822 3399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:19:14.464250 kubelet[3399]: I0413 20:19:14.463851 3399 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 20:19:14.464250 kubelet[3399]: I0413 20:19:14.463876 3399 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:19:14.464250 kubelet[3399]: I0413 20:19:14.463884 3399 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 20:19:14.464250 kubelet[3399]: E0413 20:19:14.463932 3399 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:19:14.481205 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3433)
Apr 13 20:19:14.568157 kubelet[3399]: E0413 20:19:14.565810 3399 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 20:19:14.678573 kubelet[3399]: I0413 20:19:14.678531 3399 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:19:14.678573 kubelet[3399]: I0413 20:19:14.678551 3399 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:19:14.678573 kubelet[3399]: I0413 20:19:14.678576 3399 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:19:14.678785 kubelet[3399]: I0413 20:19:14.678738 3399 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 20:19:14.678785 kubelet[3399]: I0413 20:19:14.678753 3399 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 20:19:14.678785 kubelet[3399]: I0413 20:19:14.678777 3399 policy_none.go:49] "None policy: Start"
Apr 13 20:19:14.678906 kubelet[3399]: I0413 20:19:14.678792 3399 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 20:19:14.678906 kubelet[3399]: I0413 20:19:14.678805 3399 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 20:19:14.678985 kubelet[3399]: I0413 20:19:14.678938 3399 state_mem.go:75] "Updated machine memory state"
Apr 13 20:19:14.686846 kubelet[3399]: E0413 20:19:14.686808 3399 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:19:14.687047 kubelet[3399]: I0413 20:19:14.687030 3399 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:19:14.687103 kubelet[3399]: I0413 20:19:14.687051 3399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:19:14.696245 kubelet[3399]: I0413 20:19:14.695760 3399 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:19:14.714583 kubelet[3399]: E0413 20:19:14.714373 3399 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:19:14.773978 kubelet[3399]: I0413 20:19:14.773641 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28"
Apr 13 20:19:14.781824 kubelet[3399]: I0413 20:19:14.780788 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:14.781824 kubelet[3399]: I0413 20:19:14.780847 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.782182 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3433)
Apr 13 20:19:14.789055 kubelet[3399]: I0413 20:19:14.789013 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:14.789222 kubelet[3399]: I0413 20:19:14.789186 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.789356 kubelet[3399]: I0413 20:19:14.789217 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.789405 kubelet[3399]: I0413 20:19:14.789376 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.789517 kubelet[3399]: I0413 20:19:14.789501 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cfe737b4c94470ae4b8010d35ce2ffb-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-28\" (UID: \"3cfe737b4c94470ae4b8010d35ce2ffb\") " pod="kube-system/kube-scheduler-ip-172-31-17-28"
Apr 13 20:19:14.789564 kubelet[3399]: I0413 20:19:14.789536 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-ca-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:14.789678 kubelet[3399]: I0413 20:19:14.789661 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71af294dce043cdafc8c5ad318791fc9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"71af294dce043cdafc8c5ad318791fc9\") " pod="kube-system/kube-apiserver-ip-172-31-17-28"
Apr 13 20:19:14.789733 kubelet[3399]: I0413 20:19:14.789690 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.789839 kubelet[3399]: I0413 20:19:14.789822 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eee871cc9a80acc7e8d0b92c917d727-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"5eee871cc9a80acc7e8d0b92c917d727\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28"
Apr 13 20:19:14.832758 kubelet[3399]: I0413 20:19:14.832723 3399 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28"
Apr 13 20:19:14.848078 kubelet[3399]: I0413 20:19:14.848039 3399 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-28"
Apr 13 20:19:14.848396 kubelet[3399]: I0413 20:19:14.848297 3399 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-28"
Apr 13 20:19:15.346317 kubelet[3399]: I0413 20:19:15.346275 3399 apiserver.go:52] "Watching apiserver"
Apr 13 20:19:15.385209 kubelet[3399]: I0413 20:19:15.384822 3399 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 20:19:15.496598 kubelet[3399]: I0413 20:19:15.496304 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-28" podStartSLOduration=1.496283895 podStartE2EDuration="1.496283895s" podCreationTimestamp="2026-04-13 20:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:15.483467967 +0000 UTC m=+1.255211118" watchObservedRunningTime="2026-04-13 20:19:15.496283895 +0000 UTC m=+1.268027050"
Apr 13 20:19:15.509696 kubelet[3399]: I0413 20:19:15.508830 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-28" podStartSLOduration=1.508813269 podStartE2EDuration="1.508813269s" podCreationTimestamp="2026-04-13 20:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:15.497161866 +0000 UTC m=+1.268905021" watchObservedRunningTime="2026-04-13 20:19:15.508813269 +0000 UTC m=+1.280556428"
Apr 13 20:19:15.509696 kubelet[3399]: I0413 20:19:15.509014 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-28" podStartSLOduration=1.509003589 podStartE2EDuration="1.509003589s" podCreationTimestamp="2026-04-13 20:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:15.50861344 +0000 UTC m=+1.280356598" watchObservedRunningTime="2026-04-13 20:19:15.509003589 +0000 UTC m=+1.280746752"
Apr 13 20:19:20.451604 kubelet[3399]: I0413 20:19:20.451566 3399 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 20:19:20.452224 containerd[2111]: time="2026-04-13T20:19:20.451969505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 20:19:20.452644 kubelet[3399]: I0413 20:19:20.452374 3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 20:19:21.541053 kubelet[3399]: I0413 20:19:21.540856 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c-xtables-lock\") pod \"kube-proxy-qp8nr\" (UID: \"36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c\") " pod="kube-system/kube-proxy-qp8nr"
Apr 13 20:19:21.541053 kubelet[3399]: I0413 20:19:21.541053 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtdm\" (UniqueName: \"kubernetes.io/projected/36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c-kube-api-access-dwtdm\") pod \"kube-proxy-qp8nr\" (UID: \"36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c\") " pod="kube-system/kube-proxy-qp8nr"
Apr 13 20:19:21.541677 kubelet[3399]: I0413 20:19:21.541095 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c-kube-proxy\") pod \"kube-proxy-qp8nr\" (UID: \"36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c\") " pod="kube-system/kube-proxy-qp8nr"
Apr 13 20:19:21.541677 kubelet[3399]: I0413 20:19:21.541115 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c-lib-modules\") pod \"kube-proxy-qp8nr\" (UID: \"36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c\") " pod="kube-system/kube-proxy-qp8nr"
Apr 13 20:19:21.742950 kubelet[3399]: I0413 20:19:21.742901 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/873eb15c-e11e-428c-997c-0ed052daca76-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-cbdxd\" (UID: \"873eb15c-e11e-428c-997c-0ed052daca76\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cbdxd"
Apr 13 20:19:21.743320 kubelet[3399]: I0413 20:19:21.742964 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6csq\" (UniqueName: \"kubernetes.io/projected/873eb15c-e11e-428c-997c-0ed052daca76-kube-api-access-b6csq\") pod \"tigera-operator-6bf85f8dd-cbdxd\" (UID: \"873eb15c-e11e-428c-997c-0ed052daca76\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cbdxd"
Apr 13 20:19:21.799099 containerd[2111]: time="2026-04-13T20:19:21.798986661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qp8nr,Uid:36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c,Namespace:kube-system,Attempt:0,}"
Apr 13 20:19:21.843598 containerd[2111]: time="2026-04-13T20:19:21.842983337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:19:21.844380 containerd[2111]: time="2026-04-13T20:19:21.843550285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:19:21.844537 containerd[2111]: time="2026-04-13T20:19:21.844473118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:21.844798 containerd[2111]: time="2026-04-13T20:19:21.844745171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:21.903583 containerd[2111]: time="2026-04-13T20:19:21.903462717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qp8nr,Uid:36b5753d-c5be-4a3d-a0da-cd98bbc3fb8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4304382007f3cb3981fe0ca29e3a3a85c6b0bb86d572a6f16b1a7af5b17c785c\""
Apr 13 20:19:21.911883 containerd[2111]: time="2026-04-13T20:19:21.911516687Z" level=info msg="CreateContainer within sandbox \"4304382007f3cb3981fe0ca29e3a3a85c6b0bb86d572a6f16b1a7af5b17c785c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 20:19:21.937504 containerd[2111]: time="2026-04-13T20:19:21.937456727Z" level=info msg="CreateContainer within sandbox \"4304382007f3cb3981fe0ca29e3a3a85c6b0bb86d572a6f16b1a7af5b17c785c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11b4663bef31b01f8fe40b248d2f0552562537e7b88ce23df516465a6a552191\""
Apr 13 20:19:21.939623 containerd[2111]: time="2026-04-13T20:19:21.938310104Z" level=info msg="StartContainer for \"11b4663bef31b01f8fe40b248d2f0552562537e7b88ce23df516465a6a552191\""
Apr 13 20:19:22.008446 containerd[2111]: time="2026-04-13T20:19:22.008399776Z" level=info msg="StartContainer for \"11b4663bef31b01f8fe40b248d2f0552562537e7b88ce23df516465a6a552191\" returns successfully"
Apr 13 20:19:22.028093 containerd[2111]: time="2026-04-13T20:19:22.028047783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cbdxd,Uid:873eb15c-e11e-428c-997c-0ed052daca76,Namespace:tigera-operator,Attempt:0,}"
Apr 13 20:19:22.067659 containerd[2111]: time="2026-04-13T20:19:22.067233034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:19:22.067659 containerd[2111]: time="2026-04-13T20:19:22.067278559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:19:22.067659 containerd[2111]: time="2026-04-13T20:19:22.067289985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:22.067659 containerd[2111]: time="2026-04-13T20:19:22.067380421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:22.140477 containerd[2111]: time="2026-04-13T20:19:22.140433811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cbdxd,Uid:873eb15c-e11e-428c-997c-0ed052daca76,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8dd900dad073b757e513d30f05c8e6c7efbd54b8d2f9d58f59105e1ae3a372ab\""
Apr 13 20:19:22.143265 containerd[2111]: time="2026-04-13T20:19:22.142617263Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 13 20:19:22.559848 kubelet[3399]: I0413 20:19:22.558118 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qp8nr" podStartSLOduration=1.558097435 podStartE2EDuration="1.558097435s" podCreationTimestamp="2026-04-13 20:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:19:22.558092135 +0000 UTC m=+8.329835296" watchObservedRunningTime="2026-04-13 20:19:22.558097435 +0000 UTC m=+8.329840597"
Apr 13 20:19:23.370722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718276418.mount: Deactivated successfully.
Apr 13 20:19:27.339571 containerd[2111]: time="2026-04-13T20:19:27.339511227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:27.341590 containerd[2111]: time="2026-04-13T20:19:27.341490385Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 13 20:19:27.343977 containerd[2111]: time="2026-04-13T20:19:27.343936115Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:27.348219 containerd[2111]: time="2026-04-13T20:19:27.348120864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:19:27.348219 containerd[2111]: time="2026-04-13T20:19:27.348835534Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.206171978s"
Apr 13 20:19:27.348219 containerd[2111]: time="2026-04-13T20:19:27.348878638Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 13 20:19:27.356427 containerd[2111]: time="2026-04-13T20:19:27.356387072Z" level=info msg="CreateContainer within sandbox \"8dd900dad073b757e513d30f05c8e6c7efbd54b8d2f9d58f59105e1ae3a372ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 13 20:19:27.380782 containerd[2111]: time="2026-04-13T20:19:27.380720612Z" level=info msg="CreateContainer within sandbox \"8dd900dad073b757e513d30f05c8e6c7efbd54b8d2f9d58f59105e1ae3a372ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310\""
Apr 13 20:19:27.381739 containerd[2111]: time="2026-04-13T20:19:27.381694700Z" level=info msg="StartContainer for \"8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310\""
Apr 13 20:19:27.422052 systemd[1]: run-containerd-runc-k8s.io-8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310-runc.2hCSHT.mount: Deactivated successfully.
Apr 13 20:19:27.456091 containerd[2111]: time="2026-04-13T20:19:27.456023931Z" level=info msg="StartContainer for \"8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310\" returns successfully"
Apr 13 20:19:27.571941 kubelet[3399]: I0413 20:19:27.571855 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-cbdxd" podStartSLOduration=1.3635834820000001 podStartE2EDuration="6.57183489s" podCreationTimestamp="2026-04-13 20:19:21 +0000 UTC" firstStartedPulling="2026-04-13 20:19:22.142047165 +0000 UTC m=+7.913790318" lastFinishedPulling="2026-04-13 20:19:27.350298586 +0000 UTC m=+13.122041726" observedRunningTime="2026-04-13 20:19:27.571392869 +0000 UTC m=+13.343136029" watchObservedRunningTime="2026-04-13 20:19:27.57183489 +0000 UTC m=+13.343578053"
Apr 13 20:19:34.653479 sudo[2468]: pam_unix(sudo:session): session closed for user root
Apr 13 20:19:34.815456 sshd[2464]: pam_unix(sshd:session): session closed for user core
Apr 13 20:19:34.822613 systemd[1]: sshd@6-172.31.17.28:22-50.85.169.122:50656.service: Deactivated successfully.
Apr 13 20:19:34.831420 systemd-logind[2083]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:19:34.832532 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:19:34.847386 systemd-logind[2083]: Removed session 7.
Apr 13 20:19:36.556903 kubelet[3399]: I0413 20:19:36.556856 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64e126ed-e4fa-4d4b-8988-213d539b3ca8-tigera-ca-bundle\") pod \"calico-typha-7848c67957-f6dsz\" (UID: \"64e126ed-e4fa-4d4b-8988-213d539b3ca8\") " pod="calico-system/calico-typha-7848c67957-f6dsz"
Apr 13 20:19:36.556903 kubelet[3399]: I0413 20:19:36.556910 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjdx\" (UniqueName: \"kubernetes.io/projected/64e126ed-e4fa-4d4b-8988-213d539b3ca8-kube-api-access-rqjdx\") pod \"calico-typha-7848c67957-f6dsz\" (UID: \"64e126ed-e4fa-4d4b-8988-213d539b3ca8\") " pod="calico-system/calico-typha-7848c67957-f6dsz"
Apr 13 20:19:36.557531 kubelet[3399]: I0413 20:19:36.556944 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/64e126ed-e4fa-4d4b-8988-213d539b3ca8-typha-certs\") pod \"calico-typha-7848c67957-f6dsz\" (UID: \"64e126ed-e4fa-4d4b-8988-213d539b3ca8\") " pod="calico-system/calico-typha-7848c67957-f6dsz"
Apr 13 20:19:36.759370 kubelet[3399]: I0413 20:19:36.759326 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-cni-log-dir\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759370 kubelet[3399]: I0413 20:19:36.759382 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-var-lib-calico\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759605 kubelet[3399]: I0413 20:19:36.759408 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8kh\" (UniqueName: \"kubernetes.io/projected/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-kube-api-access-fd8kh\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759605 kubelet[3399]: I0413 20:19:36.759436 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-cni-bin-dir\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759605 kubelet[3399]: I0413 20:19:36.759455 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-lib-modules\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759605 kubelet[3399]: I0413 20:19:36.759474 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-xtables-lock\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759605 kubelet[3399]: I0413 20:19:36.759495 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-nodeproc\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759819 kubelet[3399]: I0413 20:19:36.759515 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-policysync\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759819 kubelet[3399]: I0413 20:19:36.759538 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-node-certs\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759819 kubelet[3399]: I0413 20:19:36.759562 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-var-run-calico\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759819 kubelet[3399]: I0413 20:19:36.759586 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-cni-net-dir\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.759819 kubelet[3399]: I0413 20:19:36.759610 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-flexvol-driver-host\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.760005 kubelet[3399]: I0413 20:19:36.759635 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-sys-fs\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.760005 kubelet[3399]: I0413 20:19:36.759660 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-tigera-ca-bundle\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.760005 kubelet[3399]: I0413 20:19:36.759691 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b-bpffs\") pod \"calico-node-497b7\" (UID: \"05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b\") " pod="calico-system/calico-node-497b7"
Apr 13 20:19:36.763502 containerd[2111]: time="2026-04-13T20:19:36.763450954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7848c67957-f6dsz,Uid:64e126ed-e4fa-4d4b-8988-213d539b3ca8,Namespace:calico-system,Attempt:0,}"
Apr 13 20:19:36.775292 kubelet[3399]: E0413 20:19:36.774955 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44"
Apr 13 20:19:36.854279 containerd[2111]: time="2026-04-13T20:19:36.853446123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:19:36.854279 containerd[2111]: time="2026-04-13T20:19:36.853541589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:19:36.857010 containerd[2111]: time="2026-04-13T20:19:36.854349248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:36.857010 containerd[2111]: time="2026-04-13T20:19:36.854565170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:19:36.867344 kubelet[3399]: I0413 20:19:36.867193 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6a5c10c-7432-4751-8918-4251f504fa44-kubelet-dir\") pod \"csi-node-driver-bs499\" (UID: \"b6a5c10c-7432-4751-8918-4251f504fa44\") " pod="calico-system/csi-node-driver-bs499"
Apr 13 20:19:36.867344 kubelet[3399]: I0413 20:19:36.867265 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldd8\" (UniqueName: \"kubernetes.io/projected/b6a5c10c-7432-4751-8918-4251f504fa44-kube-api-access-vldd8\") pod \"csi-node-driver-bs499\" (UID: \"b6a5c10c-7432-4751-8918-4251f504fa44\") " pod="calico-system/csi-node-driver-bs499"
Apr 13 20:19:36.867527 kubelet[3399]: I0413 20:19:36.867367 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6a5c10c-7432-4751-8918-4251f504fa44-registration-dir\") pod \"csi-node-driver-bs499\" (UID: \"b6a5c10c-7432-4751-8918-4251f504fa44\") " pod="calico-system/csi-node-driver-bs499"
Apr 13 20:19:36.867527 kubelet[3399]: I0413 20:19:36.867393 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6a5c10c-7432-4751-8918-4251f504fa44-socket-dir\") pod \"csi-node-driver-bs499\" (UID: \"b6a5c10c-7432-4751-8918-4251f504fa44\") " pod="calico-system/csi-node-driver-bs499"
Apr 13 20:19:36.867527 kubelet[3399]: I0413 20:19:36.867457 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6a5c10c-7432-4751-8918-4251f504fa44-varrun\") pod \"csi-node-driver-bs499\" (UID: \"b6a5c10c-7432-4751-8918-4251f504fa44\") " pod="calico-system/csi-node-driver-bs499"
Apr 13 20:19:36.883659 kubelet[3399]: E0413 20:19:36.881914 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.883659 kubelet[3399]: W0413 20:19:36.882659 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.883659 kubelet[3399]: E0413 20:19:36.882695 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.887156 kubelet[3399]: E0413 20:19:36.886980 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.887156 kubelet[3399]: W0413 20:19:36.887007 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.887156 kubelet[3399]: E0413 20:19:36.887031 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.910342 kubelet[3399]: E0413 20:19:36.909293 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.910342 kubelet[3399]: W0413 20:19:36.909318 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.910342 kubelet[3399]: E0413 20:19:36.909343 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.916885 kubelet[3399]: E0413 20:19:36.916774 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.916885 kubelet[3399]: W0413 20:19:36.916796 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.916885 kubelet[3399]: E0413 20:19:36.916820 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.968016 kubelet[3399]: E0413 20:19:36.967985 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.968016 kubelet[3399]: W0413 20:19:36.968006 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.968233 kubelet[3399]: E0413 20:19:36.968029 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.968480 kubelet[3399]: E0413 20:19:36.968456 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.968480 kubelet[3399]: W0413 20:19:36.968471 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.968611 kubelet[3399]: E0413 20:19:36.968486 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.968813 kubelet[3399]: E0413 20:19:36.968791 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.968813 kubelet[3399]: W0413 20:19:36.968807 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.968920 kubelet[3399]: E0413 20:19:36.968821 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:19:36.969242 kubelet[3399]: E0413 20:19:36.969213 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:19:36.969242 kubelet[3399]: W0413 20:19:36.969235 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:19:36.969366 kubelet[3399]: E0413 20:19:36.969249 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 13 20:19:36.969837 kubelet[3399]: E0413 20:19:36.969554 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.969837 kubelet[3399]: W0413 20:19:36.969567 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.969837 kubelet[3399]: E0413 20:19:36.969579 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.969988 kubelet[3399]: E0413 20:19:36.969882 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.969988 kubelet[3399]: W0413 20:19:36.969892 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.969988 kubelet[3399]: E0413 20:19:36.969903 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970244 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971088 kubelet[3399]: W0413 20:19:36.970257 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970282 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970563 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971088 kubelet[3399]: W0413 20:19:36.970572 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970583 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970859 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971088 kubelet[3399]: W0413 20:19:36.970868 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971088 kubelet[3399]: E0413 20:19:36.970879 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971210 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971917 kubelet[3399]: W0413 20:19:36.971221 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971234 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971466 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971917 kubelet[3399]: W0413 20:19:36.971477 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971500 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971813 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.971917 kubelet[3399]: W0413 20:19:36.971824 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.971917 kubelet[3399]: E0413 20:19:36.971838 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.972324 kubelet[3399]: E0413 20:19:36.972193 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.972324 kubelet[3399]: W0413 20:19:36.972218 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.972324 kubelet[3399]: E0413 20:19:36.972232 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.972510 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.975653 kubelet[3399]: W0413 20:19:36.972523 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.972535 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.972772 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.975653 kubelet[3399]: W0413 20:19:36.972782 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.972793 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.973072 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.975653 kubelet[3399]: W0413 20:19:36.973081 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.973093 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.975653 kubelet[3399]: E0413 20:19:36.973428 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976165 containerd[2111]: time="2026-04-13T20:19:36.974765693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-497b7,Uid:05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:36.976237 kubelet[3399]: W0413 20:19:36.973439 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.973451 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.973680 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976237 kubelet[3399]: W0413 20:19:36.973689 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.973700 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.974416 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976237 kubelet[3399]: W0413 20:19:36.974426 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.974439 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.976237 kubelet[3399]: E0413 20:19:36.975312 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976237 kubelet[3399]: W0413 20:19:36.975324 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.975337 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.975567 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976634 kubelet[3399]: W0413 20:19:36.975576 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.975586 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.975929 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976634 kubelet[3399]: W0413 20:19:36.975940 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.975952 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.976392 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.976634 kubelet[3399]: W0413 20:19:36.976402 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.976634 kubelet[3399]: E0413 20:19:36.976416 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.977086 kubelet[3399]: E0413 20:19:36.976783 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.977086 kubelet[3399]: W0413 20:19:36.976794 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.977086 kubelet[3399]: E0413 20:19:36.976807 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.977428 kubelet[3399]: E0413 20:19:36.977173 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.977428 kubelet[3399]: W0413 20:19:36.977184 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.977428 kubelet[3399]: E0413 20:19:36.977197 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:36.992725 kubelet[3399]: E0413 20:19:36.992615 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:36.992725 kubelet[3399]: W0413 20:19:36.992637 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:36.992725 kubelet[3399]: E0413 20:19:36.992670 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:36.999555 containerd[2111]: time="2026-04-13T20:19:36.999501034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7848c67957-f6dsz,Uid:64e126ed-e4fa-4d4b-8988-213d539b3ca8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dee9df450b7a24afa5c642af4cb99f61dda8616f3f374464f0f1f86d129524e\"" Apr 13 20:19:37.003192 containerd[2111]: time="2026-04-13T20:19:37.002374060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:19:37.021096 containerd[2111]: time="2026-04-13T20:19:37.020725897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:37.021096 containerd[2111]: time="2026-04-13T20:19:37.020812263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:37.021096 containerd[2111]: time="2026-04-13T20:19:37.020835942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:37.022043 containerd[2111]: time="2026-04-13T20:19:37.021424592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:37.063962 containerd[2111]: time="2026-04-13T20:19:37.063894044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-497b7,Uid:05e2e2cb-8bc4-42fc-91e2-47d0bd42d23b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\"" Apr 13 20:19:38.465382 kubelet[3399]: E0413 20:19:38.465103 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:38.516536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76690181.mount: Deactivated successfully. Apr 13 20:19:39.355288 containerd[2111]: time="2026-04-13T20:19:39.355228727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:39.356887 containerd[2111]: time="2026-04-13T20:19:39.356729938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:19:39.358930 containerd[2111]: time="2026-04-13T20:19:39.358707968Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:39.362178 containerd[2111]: time="2026-04-13T20:19:39.362088800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:39.363125 containerd[2111]: time="2026-04-13T20:19:39.362941806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.360457627s" Apr 13 20:19:39.363125 containerd[2111]: time="2026-04-13T20:19:39.362984534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:19:39.364703 containerd[2111]: time="2026-04-13T20:19:39.364513130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:19:39.393470 containerd[2111]: time="2026-04-13T20:19:39.393428854Z" level=info msg="CreateContainer within sandbox \"8dee9df450b7a24afa5c642af4cb99f61dda8616f3f374464f0f1f86d129524e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:19:39.421121 containerd[2111]: time="2026-04-13T20:19:39.421066196Z" level=info msg="CreateContainer within sandbox \"8dee9df450b7a24afa5c642af4cb99f61dda8616f3f374464f0f1f86d129524e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5f66089ef853620bb7061006dd6829488d7d3a079321b44e065097cc2db14174\"" Apr 13 20:19:39.421970 containerd[2111]: time="2026-04-13T20:19:39.421718127Z" level=info msg="StartContainer for \"5f66089ef853620bb7061006dd6829488d7d3a079321b44e065097cc2db14174\"" Apr 13 20:19:39.507199 containerd[2111]: time="2026-04-13T20:19:39.507004948Z" level=info msg="StartContainer for \"5f66089ef853620bb7061006dd6829488d7d3a079321b44e065097cc2db14174\" returns successfully" Apr 13 20:19:39.648254 kubelet[3399]: E0413 20:19:39.648190 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.648254 kubelet[3399]: W0413 20:19:39.648217 3399 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.649475 kubelet[3399]: E0413 20:19:39.648832 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.649475 kubelet[3399]: E0413 20:19:39.649129 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.649475 kubelet[3399]: W0413 20:19:39.649247 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.649475 kubelet[3399]: E0413 20:19:39.649270 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.650037 kubelet[3399]: E0413 20:19:39.649644 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.650037 kubelet[3399]: W0413 20:19:39.649752 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.650037 kubelet[3399]: E0413 20:19:39.649771 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:39.650645 kubelet[3399]: E0413 20:19:39.650418 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.650645 kubelet[3399]: W0413 20:19:39.650432 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.650645 kubelet[3399]: E0413 20:19:39.650448 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.651439 kubelet[3399]: E0413 20:19:39.651048 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.651439 kubelet[3399]: W0413 20:19:39.651061 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.651439 kubelet[3399]: E0413 20:19:39.651076 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:39.651967 kubelet[3399]: E0413 20:19:39.651752 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.651967 kubelet[3399]: W0413 20:19:39.651767 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.651967 kubelet[3399]: E0413 20:19:39.651781 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.652912 kubelet[3399]: E0413 20:19:39.652633 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.652912 kubelet[3399]: W0413 20:19:39.652647 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.652912 kubelet[3399]: E0413 20:19:39.652660 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:39.653482 kubelet[3399]: E0413 20:19:39.653338 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.653482 kubelet[3399]: W0413 20:19:39.653352 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.653482 kubelet[3399]: E0413 20:19:39.653365 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.653883 kubelet[3399]: E0413 20:19:39.653774 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.653883 kubelet[3399]: W0413 20:19:39.653786 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.653883 kubelet[3399]: E0413 20:19:39.653799 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:39.654803 kubelet[3399]: E0413 20:19:39.654603 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.654803 kubelet[3399]: W0413 20:19:39.654616 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.654803 kubelet[3399]: E0413 20:19:39.654629 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.655173 kubelet[3399]: E0413 20:19:39.655022 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.655173 kubelet[3399]: W0413 20:19:39.655035 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.655173 kubelet[3399]: E0413 20:19:39.655047 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:39.655980 kubelet[3399]: E0413 20:19:39.655833 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.655980 kubelet[3399]: W0413 20:19:39.655846 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.655980 kubelet[3399]: E0413 20:19:39.655860 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:39.656709 kubelet[3399]: E0413 20:19:39.656631 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:39.656709 kubelet[3399]: W0413 20:19:39.656645 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:39.656709 kubelet[3399]: E0413 20:19:39.656658 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:40.375352 systemd[1]: run-containerd-runc-k8s.io-5f66089ef853620bb7061006dd6829488d7d3a079321b44e065097cc2db14174-runc.9wLXKZ.mount: Deactivated successfully. 
Apr 13 20:19:40.465414 kubelet[3399]: E0413 20:19:40.465351 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:40.613775 kubelet[3399]: I0413 20:19:40.613747 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:40.664075 kubelet[3399]: E0413 20:19:40.664042 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:40.664075 kubelet[3399]: W0413 20:19:40.664074 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:40.664692 kubelet[3399]: E0413 20:19:40.664108 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:19:40.664692 kubelet[3399]: E0413 20:19:40.664504 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:19:40.664692 kubelet[3399]: W0413 20:19:40.664591 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:19:40.664692 kubelet[3399]: E0413 20:19:40.664612 3399 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:19:40.739452 containerd[2111]: time="2026-04-13T20:19:40.739385300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:40.745065 containerd[2111]: time="2026-04-13T20:19:40.744308390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:19:40.748353 containerd[2111]: time="2026-04-13T20:19:40.748312451Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:40.752225 containerd[2111]: time="2026-04-13T20:19:40.751887363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:40.754484 containerd[2111]: time="2026-04-13T20:19:40.754446442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.389895433s" Apr 13 20:19:40.754721 containerd[2111]: time="2026-04-13T20:19:40.754606679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:19:40.761454 containerd[2111]: time="2026-04-13T20:19:40.761415094Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:19:40.787432 containerd[2111]: time="2026-04-13T20:19:40.787386146Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452\"" Apr 13 20:19:40.788495 containerd[2111]: time="2026-04-13T20:19:40.788222362Z" level=info msg="StartContainer for \"e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452\"" Apr 13 20:19:40.827488 systemd[1]: run-containerd-runc-k8s.io-e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452-runc.A53rfS.mount: Deactivated successfully. 
Apr 13 20:19:40.867317 containerd[2111]: time="2026-04-13T20:19:40.867265846Z" level=info msg="StartContainer for \"e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452\" returns successfully" Apr 13 20:19:41.000682 containerd[2111]: time="2026-04-13T20:19:40.967257564Z" level=info msg="shim disconnected" id=e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452 namespace=k8s.io Apr 13 20:19:41.000682 containerd[2111]: time="2026-04-13T20:19:41.000394204Z" level=warning msg="cleaning up after shim disconnected" id=e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452 namespace=k8s.io Apr 13 20:19:41.000682 containerd[2111]: time="2026-04-13T20:19:41.000417252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:19:41.022056 containerd[2111]: time="2026-04-13T20:19:41.022011790Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:19:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:19:41.375298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e192a494baf8d566a27e2e218793bf874d4732a88e69c7be90376fccc812a452-rootfs.mount: Deactivated successfully. 
Apr 13 20:19:41.618456 containerd[2111]: time="2026-04-13T20:19:41.618086877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:19:41.645909 kubelet[3399]: I0413 20:19:41.639583 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7848c67957-f6dsz" podStartSLOduration=3.274944365 podStartE2EDuration="5.638115786s" podCreationTimestamp="2026-04-13 20:19:36 +0000 UTC" firstStartedPulling="2026-04-13 20:19:37.000895467 +0000 UTC m=+22.772638607" lastFinishedPulling="2026-04-13 20:19:39.364066886 +0000 UTC m=+25.135810028" observedRunningTime="2026-04-13 20:19:39.622619682 +0000 UTC m=+25.394362844" watchObservedRunningTime="2026-04-13 20:19:41.638115786 +0000 UTC m=+27.409858949" Apr 13 20:19:42.467444 kubelet[3399]: E0413 20:19:42.467403 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:44.466332 kubelet[3399]: E0413 20:19:44.465911 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:46.466542 kubelet[3399]: E0413 20:19:46.465379 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:47.965753 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3689736351.mount: Deactivated successfully. Apr 13 20:19:48.026039 containerd[2111]: time="2026-04-13T20:19:48.025905050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:48.028865 containerd[2111]: time="2026-04-13T20:19:48.028795101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:19:48.032182 containerd[2111]: time="2026-04-13T20:19:48.032070340Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:48.037380 containerd[2111]: time="2026-04-13T20:19:48.037316215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:48.038702 containerd[2111]: time="2026-04-13T20:19:48.038032570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.419799816s" Apr 13 20:19:48.038702 containerd[2111]: time="2026-04-13T20:19:48.038077818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:19:48.234723 containerd[2111]: time="2026-04-13T20:19:48.234609631Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 
20:19:48.296892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751080690.mount: Deactivated successfully. Apr 13 20:19:48.302005 containerd[2111]: time="2026-04-13T20:19:48.301949069Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1\"" Apr 13 20:19:48.313214 containerd[2111]: time="2026-04-13T20:19:48.312332027Z" level=info msg="StartContainer for \"88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1\"" Apr 13 20:19:48.400063 containerd[2111]: time="2026-04-13T20:19:48.400021009Z" level=info msg="StartContainer for \"88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1\" returns successfully" Apr 13 20:19:48.465953 kubelet[3399]: E0413 20:19:48.465740 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:48.709645 containerd[2111]: time="2026-04-13T20:19:48.709561911Z" level=info msg="shim disconnected" id=88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1 namespace=k8s.io Apr 13 20:19:48.709645 containerd[2111]: time="2026-04-13T20:19:48.709629650Z" level=warning msg="cleaning up after shim disconnected" id=88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1 namespace=k8s.io Apr 13 20:19:48.709645 containerd[2111]: time="2026-04-13T20:19:48.709644412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:19:48.966259 systemd[1]: run-containerd-runc-k8s.io-88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1-runc.0zFE8w.mount: Deactivated successfully. 
Apr 13 20:19:48.966435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88fb8a77e2bc5048d804d8747adc1176eb46e88cf8fd2ca35c8d103f86b3f2f1-rootfs.mount: Deactivated successfully. Apr 13 20:19:49.137448 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:19:49.139357 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:19:49.137516 systemd-resolved[1989]: Flushed all caches. Apr 13 20:19:49.713166 containerd[2111]: time="2026-04-13T20:19:49.711382741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:19:50.466761 kubelet[3399]: E0413 20:19:50.466710 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:51.185280 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:19:51.185331 systemd-resolved[1989]: Flushed all caches. Apr 13 20:19:51.189258 systemd-journald[1573]: Under memory pressure, flushing caches. 
Apr 13 20:19:51.401404 kubelet[3399]: I0413 20:19:51.400790 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:19:52.465621 kubelet[3399]: E0413 20:19:52.465579 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:53.142510 containerd[2111]: time="2026-04-13T20:19:53.142454142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:53.144555 containerd[2111]: time="2026-04-13T20:19:53.144353503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:19:53.146748 containerd[2111]: time="2026-04-13T20:19:53.146670033Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:53.150415 containerd[2111]: time="2026-04-13T20:19:53.150349177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:19:53.151485 containerd[2111]: time="2026-04-13T20:19:53.151324080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.439870576s" Apr 13 20:19:53.151485 containerd[2111]: time="2026-04-13T20:19:53.151366152Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:19:53.158248 containerd[2111]: time="2026-04-13T20:19:53.158209355Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:19:53.185627 containerd[2111]: time="2026-04-13T20:19:53.185575130Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b\"" Apr 13 20:19:53.187216 containerd[2111]: time="2026-04-13T20:19:53.187178132Z" level=info msg="StartContainer for \"38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b\"" Apr 13 20:19:53.262249 containerd[2111]: time="2026-04-13T20:19:53.262201852Z" level=info msg="StartContainer for \"38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b\" returns successfully" Apr 13 20:19:54.313671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b-rootfs.mount: Deactivated successfully. 
Apr 13 20:19:54.319932 containerd[2111]: time="2026-04-13T20:19:54.319270702Z" level=info msg="shim disconnected" id=38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b namespace=k8s.io Apr 13 20:19:54.319932 containerd[2111]: time="2026-04-13T20:19:54.319345678Z" level=warning msg="cleaning up after shim disconnected" id=38a6eb3a471c28198b810eb2f0ac1854fd8253fef2b75f411a5d07e78298715b namespace=k8s.io Apr 13 20:19:54.319932 containerd[2111]: time="2026-04-13T20:19:54.319358502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:19:54.375970 kubelet[3399]: I0413 20:19:54.375920 3399 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 20:19:54.520964 containerd[2111]: time="2026-04-13T20:19:54.520347974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs499,Uid:b6a5c10c-7432-4751-8918-4251f504fa44,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:54.646760 kubelet[3399]: I0413 20:19:54.646727 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr5b\" (UniqueName: \"kubernetes.io/projected/82c2fe9b-a341-45a1-998b-bc57b78f0096-kube-api-access-nhr5b\") pod \"calico-apiserver-5944887d59-qk82w\" (UID: \"82c2fe9b-a341-45a1-998b-bc57b78f0096\") " pod="calico-system/calico-apiserver-5944887d59-qk82w" Apr 13 20:19:54.647114 kubelet[3399]: I0413 20:19:54.646934 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8gv8\" (UniqueName: \"kubernetes.io/projected/071cac38-3892-4c51-a239-a556805da745-kube-api-access-l8gv8\") pod \"coredns-674b8bbfcf-xqfg2\" (UID: \"071cac38-3892-4c51-a239-a556805da745\") " pod="kube-system/coredns-674b8bbfcf-xqfg2" Apr 13 20:19:54.647114 kubelet[3399]: I0413 20:19:54.647077 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-nginx-config\") pod \"whisker-7f9f779ffd-5rvbw\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:54.647377 kubelet[3399]: I0413 20:19:54.647264 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb4a49c-3535-4abb-96bd-a03060aeb7aa-tigera-ca-bundle\") pod \"calico-kube-controllers-6b846df46d-nfrl6\" (UID: \"2fb4a49c-3535-4abb-96bd-a03060aeb7aa\") " pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" Apr 13 20:19:54.647377 kubelet[3399]: I0413 20:19:54.647294 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-backend-key-pair\") pod \"whisker-7f9f779ffd-5rvbw\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:54.647377 kubelet[3399]: I0413 20:19:54.647427 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qjj\" (UniqueName: \"kubernetes.io/projected/2fb4a49c-3535-4abb-96bd-a03060aeb7aa-kube-api-access-l5qjj\") pod \"calico-kube-controllers-6b846df46d-nfrl6\" (UID: \"2fb4a49c-3535-4abb-96bd-a03060aeb7aa\") " pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" Apr 13 20:19:54.647851 kubelet[3399]: I0413 20:19:54.647558 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/071cac38-3892-4c51-a239-a556805da745-config-volume\") pod \"coredns-674b8bbfcf-xqfg2\" (UID: \"071cac38-3892-4c51-a239-a556805da745\") " pod="kube-system/coredns-674b8bbfcf-xqfg2" Apr 13 20:19:54.647851 kubelet[3399]: I0413 20:19:54.647592 3399 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7110801-c02b-4008-8b00-b210e85462f6-config\") pod \"goldmane-5b85766d88-6p8z6\" (UID: \"e7110801-c02b-4008-8b00-b210e85462f6\") " pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:54.647851 kubelet[3399]: I0413 20:19:54.647613 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e7110801-c02b-4008-8b00-b210e85462f6-goldmane-key-pair\") pod \"goldmane-5b85766d88-6p8z6\" (UID: \"e7110801-c02b-4008-8b00-b210e85462f6\") " pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:54.648161 kubelet[3399]: I0413 20:19:54.648017 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7110801-c02b-4008-8b00-b210e85462f6-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-6p8z6\" (UID: \"e7110801-c02b-4008-8b00-b210e85462f6\") " pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:54.648161 kubelet[3399]: I0413 20:19:54.648072 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrf7\" (UniqueName: \"kubernetes.io/projected/e7110801-c02b-4008-8b00-b210e85462f6-kube-api-access-bmrf7\") pod \"goldmane-5b85766d88-6p8z6\" (UID: \"e7110801-c02b-4008-8b00-b210e85462f6\") " pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:54.648161 kubelet[3399]: I0413 20:19:54.648112 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdrv\" (UniqueName: \"kubernetes.io/projected/fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6-kube-api-access-5wdrv\") pod \"calico-apiserver-5944887d59-hnxzb\" (UID: \"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6\") " pod="calico-system/calico-apiserver-5944887d59-hnxzb" Apr 13 20:19:54.648635 
kubelet[3399]: I0413 20:19:54.648251 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsb4\" (UniqueName: \"kubernetes.io/projected/ef0b011b-1723-449b-9203-e921c49b3890-kube-api-access-kfsb4\") pod \"coredns-674b8bbfcf-497fh\" (UID: \"ef0b011b-1723-449b-9203-e921c49b3890\") " pod="kube-system/coredns-674b8bbfcf-497fh" Apr 13 20:19:54.648635 kubelet[3399]: I0413 20:19:54.648422 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-ca-bundle\") pod \"whisker-7f9f779ffd-5rvbw\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:54.648924 kubelet[3399]: I0413 20:19:54.648464 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6-calico-apiserver-certs\") pod \"calico-apiserver-5944887d59-hnxzb\" (UID: \"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6\") " pod="calico-system/calico-apiserver-5944887d59-hnxzb" Apr 13 20:19:54.648924 kubelet[3399]: I0413 20:19:54.648794 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0b011b-1723-449b-9203-e921c49b3890-config-volume\") pod \"coredns-674b8bbfcf-497fh\" (UID: \"ef0b011b-1723-449b-9203-e921c49b3890\") " pod="kube-system/coredns-674b8bbfcf-497fh" Apr 13 20:19:54.648924 kubelet[3399]: I0413 20:19:54.648839 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82c2fe9b-a341-45a1-998b-bc57b78f0096-calico-apiserver-certs\") pod \"calico-apiserver-5944887d59-qk82w\" (UID: 
\"82c2fe9b-a341-45a1-998b-bc57b78f0096\") " pod="calico-system/calico-apiserver-5944887d59-qk82w" Apr 13 20:19:54.648924 kubelet[3399]: I0413 20:19:54.648864 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjmk\" (UniqueName: \"kubernetes.io/projected/36b65341-e0c3-462c-84cd-6efbb156c217-kube-api-access-xsjmk\") pod \"whisker-7f9f779ffd-5rvbw\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:54.826148 containerd[2111]: time="2026-04-13T20:19:54.826095781Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:19:54.884393 containerd[2111]: time="2026-04-13T20:19:54.884345135Z" level=info msg="CreateContainer within sandbox \"3b7a3e139747376baa00f344eabf1fa9b7b0b4c793695a733c96af00e359886a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc4038175f9c0842f7b43eff43861bdeef9cc3275721244bb71d5c9dafdebd12\"" Apr 13 20:19:54.897301 containerd[2111]: time="2026-04-13T20:19:54.897177196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-hnxzb,Uid:fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:54.898845 containerd[2111]: time="2026-04-13T20:19:54.897902409Z" level=info msg="StartContainer for \"cc4038175f9c0842f7b43eff43861bdeef9cc3275721244bb71d5c9dafdebd12\"" Apr 13 20:19:54.907997 containerd[2111]: time="2026-04-13T20:19:54.907943078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6p8z6,Uid:e7110801-c02b-4008-8b00-b210e85462f6,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:55.092909 containerd[2111]: time="2026-04-13T20:19:55.090427410Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-497fh,Uid:ef0b011b-1723-449b-9203-e921c49b3890,Namespace:kube-system,Attempt:0,}" Apr 13 20:19:55.101465 containerd[2111]: time="2026-04-13T20:19:55.100533606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b846df46d-nfrl6,Uid:2fb4a49c-3535-4abb-96bd-a03060aeb7aa,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:55.103077 containerd[2111]: time="2026-04-13T20:19:55.103036292Z" level=info msg="StartContainer for \"cc4038175f9c0842f7b43eff43861bdeef9cc3275721244bb71d5c9dafdebd12\" returns successfully" Apr 13 20:19:55.113171 containerd[2111]: time="2026-04-13T20:19:55.111212603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqfg2,Uid:071cac38-3892-4c51-a239-a556805da745,Namespace:kube-system,Attempt:0,}" Apr 13 20:19:55.144135 containerd[2111]: time="2026-04-13T20:19:55.143765342Z" level=error msg="Failed to destroy network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.146791 containerd[2111]: time="2026-04-13T20:19:55.146750898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f779ffd-5rvbw,Uid:36b65341-e0c3-462c-84cd-6efbb156c217,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:55.155878 containerd[2111]: time="2026-04-13T20:19:55.153997773Z" level=error msg="encountered an error cleaning up failed sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.155430 systemd-resolved[1989]: Under memory pressure, flushing caches. 
Apr 13 20:19:55.156975 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:19:55.155476 systemd-resolved[1989]: Flushed all caches. Apr 13 20:19:55.176038 containerd[2111]: time="2026-04-13T20:19:55.175991910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-qk82w,Uid:82c2fe9b-a341-45a1-998b-bc57b78f0096,Namespace:calico-system,Attempt:0,}" Apr 13 20:19:55.193049 containerd[2111]: time="2026-04-13T20:19:55.192853080Z" level=error msg="Failed to destroy network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.195548 containerd[2111]: time="2026-04-13T20:19:55.193717611Z" level=error msg="encountered an error cleaning up failed sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.195548 containerd[2111]: time="2026-04-13T20:19:55.194006994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs499,Uid:b6a5c10c-7432-4751-8918-4251f504fa44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.238524 containerd[2111]: time="2026-04-13T20:19:55.237784014Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5944887d59-hnxzb,Uid:fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.245916 kubelet[3399]: E0413 20:19:55.245750 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.251289 kubelet[3399]: E0413 20:19:55.247969 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.251289 kubelet[3399]: E0413 20:19:55.248954 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bs499" Apr 13 20:19:55.251876 kubelet[3399]: E0413 20:19:55.251836 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5944887d59-hnxzb" Apr 13 20:19:55.255731 kubelet[3399]: E0413 20:19:55.255682 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5944887d59-hnxzb" Apr 13 20:19:55.256515 kubelet[3399]: E0413 20:19:55.255788 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5944887d59-hnxzb_calico-system(fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5944887d59-hnxzb_calico-system(fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5944887d59-hnxzb" podUID="fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6" Apr 13 20:19:55.256982 kubelet[3399]: E0413 20:19:55.256946 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bs499" Apr 13 20:19:55.257100 kubelet[3399]: E0413 20:19:55.257041 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bs499_calico-system(b6a5c10c-7432-4751-8918-4251f504fa44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bs499_calico-system(b6a5c10c-7432-4751-8918-4251f504fa44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bs499" podUID="b6a5c10c-7432-4751-8918-4251f504fa44" Apr 13 20:19:55.299915 containerd[2111]: time="2026-04-13T20:19:55.299868164Z" level=error msg="Failed to destroy network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.300527 containerd[2111]: time="2026-04-13T20:19:55.300485880Z" level=error msg="encountered an error cleaning up failed sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.301046 containerd[2111]: time="2026-04-13T20:19:55.300682908Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-6p8z6,Uid:e7110801-c02b-4008-8b00-b210e85462f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.305095 kubelet[3399]: E0413 20:19:55.301598 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.305095 kubelet[3399]: E0413 20:19:55.301666 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:55.305095 kubelet[3399]: E0413 20:19:55.301694 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-6p8z6" Apr 13 20:19:55.305513 kubelet[3399]: E0413 20:19:55.301843 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-6p8z6_calico-system(e7110801-c02b-4008-8b00-b210e85462f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-6p8z6_calico-system(e7110801-c02b-4008-8b00-b210e85462f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-6p8z6" podUID="e7110801-c02b-4008-8b00-b210e85462f6" Apr 13 20:19:55.356764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00-shm.mount: Deactivated successfully. Apr 13 20:19:55.470609 containerd[2111]: time="2026-04-13T20:19:55.468874175Z" level=error msg="Failed to destroy network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.478498 containerd[2111]: time="2026-04-13T20:19:55.477277460Z" level=error msg="encountered an error cleaning up failed sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.478498 containerd[2111]: time="2026-04-13T20:19:55.478442413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b846df46d-nfrl6,Uid:2fb4a49c-3535-4abb-96bd-a03060aeb7aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network 
for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.481574 kubelet[3399]: E0413 20:19:55.479443 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.481574 kubelet[3399]: E0413 20:19:55.479508 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" Apr 13 20:19:55.481574 kubelet[3399]: E0413 20:19:55.479542 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" Apr 13 20:19:55.480276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d-shm.mount: Deactivated successfully. 
Apr 13 20:19:55.482362 kubelet[3399]: E0413 20:19:55.479625 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b846df46d-nfrl6_calico-system(2fb4a49c-3535-4abb-96bd-a03060aeb7aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b846df46d-nfrl6_calico-system(2fb4a49c-3535-4abb-96bd-a03060aeb7aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" podUID="2fb4a49c-3535-4abb-96bd-a03060aeb7aa" Apr 13 20:19:55.545875 containerd[2111]: time="2026-04-13T20:19:55.542422513Z" level=error msg="Failed to destroy network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.547127 containerd[2111]: time="2026-04-13T20:19:55.547056657Z" level=error msg="encountered an error cleaning up failed sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.548770 containerd[2111]: time="2026-04-13T20:19:55.547879619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqfg2,Uid:071cac38-3892-4c51-a239-a556805da745,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.551254 kubelet[3399]: E0413 20:19:55.551116 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.553065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9-shm.mount: Deactivated successfully. Apr 13 20:19:55.560155 kubelet[3399]: E0413 20:19:55.551441 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqfg2" Apr 13 20:19:55.560155 kubelet[3399]: E0413 20:19:55.556217 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqfg2" Apr 13 20:19:55.560155 kubelet[3399]: E0413 20:19:55.556282 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-xqfg2_kube-system(071cac38-3892-4c51-a239-a556805da745)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xqfg2_kube-system(071cac38-3892-4c51-a239-a556805da745)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xqfg2" podUID="071cac38-3892-4c51-a239-a556805da745" Apr 13 20:19:55.587259 containerd[2111]: time="2026-04-13T20:19:55.587204758Z" level=error msg="Failed to destroy network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.611477 containerd[2111]: time="2026-04-13T20:19:55.611425722Z" level=error msg="Failed to destroy network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.612080 containerd[2111]: time="2026-04-13T20:19:55.612043161Z" level=error msg="encountered an error cleaning up failed sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.612859 containerd[2111]: time="2026-04-13T20:19:55.612265120Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f779ffd-5rvbw,Uid:36b65341-e0c3-462c-84cd-6efbb156c217,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.614166 containerd[2111]: time="2026-04-13T20:19:55.613015258Z" level=error msg="encountered an error cleaning up failed sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.614334 containerd[2111]: time="2026-04-13T20:19:55.614302958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-497fh,Uid:ef0b011b-1723-449b-9203-e921c49b3890,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.617525 kubelet[3399]: E0413 20:19:55.614616 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.617525 kubelet[3399]: E0413 20:19:55.614678 3399 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-497fh" Apr 13 20:19:55.617525 kubelet[3399]: E0413 20:19:55.614727 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-497fh" Apr 13 20:19:55.617721 kubelet[3399]: E0413 20:19:55.614788 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-497fh_kube-system(ef0b011b-1723-449b-9203-e921c49b3890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-497fh_kube-system(ef0b011b-1723-449b-9203-e921c49b3890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-497fh" podUID="ef0b011b-1723-449b-9203-e921c49b3890" Apr 13 20:19:55.617721 kubelet[3399]: E0413 20:19:55.617249 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.617721 kubelet[3399]: E0413 20:19:55.617307 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:55.617931 kubelet[3399]: E0413 20:19:55.617339 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9f779ffd-5rvbw" Apr 13 20:19:55.617931 kubelet[3399]: E0413 20:19:55.617392 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f9f779ffd-5rvbw_calico-system(36b65341-e0c3-462c-84cd-6efbb156c217)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f9f779ffd-5rvbw_calico-system(36b65341-e0c3-462c-84cd-6efbb156c217)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f9f779ffd-5rvbw" podUID="36b65341-e0c3-462c-84cd-6efbb156c217" Apr 13 20:19:55.621620 containerd[2111]: 
time="2026-04-13T20:19:55.621569301Z" level=error msg="Failed to destroy network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.622964 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597-shm.mount: Deactivated successfully. Apr 13 20:19:55.623213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02-shm.mount: Deactivated successfully. Apr 13 20:19:55.629100 containerd[2111]: time="2026-04-13T20:19:55.628207115Z" level=error msg="encountered an error cleaning up failed sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.631503 containerd[2111]: time="2026-04-13T20:19:55.629660361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-qk82w,Uid:82c2fe9b-a341-45a1-998b-bc57b78f0096,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.632012 kubelet[3399]: E0413 20:19:55.631736 3399 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:19:55.632012 kubelet[3399]: E0413 20:19:55.631857 3399 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5944887d59-qk82w" Apr 13 20:19:55.632012 kubelet[3399]: E0413 20:19:55.631889 3399 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5944887d59-qk82w" Apr 13 20:19:55.633084 kubelet[3399]: E0413 20:19:55.632827 3399 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5944887d59-qk82w_calico-system(82c2fe9b-a341-45a1-998b-bc57b78f0096)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5944887d59-qk82w_calico-system(82c2fe9b-a341-45a1-998b-bc57b78f0096)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5944887d59-qk82w" 
podUID="82c2fe9b-a341-45a1-998b-bc57b78f0096" Apr 13 20:19:55.633999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5-shm.mount: Deactivated successfully. Apr 13 20:19:55.745768 kubelet[3399]: I0413 20:19:55.745653 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:19:55.830743 kubelet[3399]: I0413 20:19:55.829942 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:19:55.831623 containerd[2111]: time="2026-04-13T20:19:55.831232347Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" Apr 13 20:19:55.838577 containerd[2111]: time="2026-04-13T20:19:55.835756492Z" level=info msg="Ensure that sandbox a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597 in task-service has been cleanup successfully" Apr 13 20:19:55.856412 kubelet[3399]: I0413 20:19:55.856383 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:19:55.857118 containerd[2111]: time="2026-04-13T20:19:55.857084736Z" level=info msg="StopPodSandbox for \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\"" Apr 13 20:19:55.859007 containerd[2111]: time="2026-04-13T20:19:55.858973534Z" level=info msg="Ensure that sandbox acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9 in task-service has been cleanup successfully" Apr 13 20:19:55.864773 containerd[2111]: time="2026-04-13T20:19:55.864722799Z" level=info msg="StopPodSandbox for \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\"" Apr 13 20:19:55.865079 containerd[2111]: time="2026-04-13T20:19:55.864968642Z" level=info msg="Ensure that sandbox 
d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02 in task-service has been cleanup successfully" Apr 13 20:19:55.883997 kubelet[3399]: I0413 20:19:55.883945 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:19:55.887592 containerd[2111]: time="2026-04-13T20:19:55.887449682Z" level=info msg="StopPodSandbox for \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\"" Apr 13 20:19:55.894371 containerd[2111]: time="2026-04-13T20:19:55.894162040Z" level=info msg="Ensure that sandbox 7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799 in task-service has been cleanup successfully" Apr 13 20:19:55.898786 kubelet[3399]: I0413 20:19:55.898594 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:19:55.903504 containerd[2111]: time="2026-04-13T20:19:55.902771120Z" level=info msg="StopPodSandbox for \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\"" Apr 13 20:19:55.906867 containerd[2111]: time="2026-04-13T20:19:55.906513423Z" level=info msg="Ensure that sandbox 513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00 in task-service has been cleanup successfully" Apr 13 20:19:55.907739 kubelet[3399]: I0413 20:19:55.907650 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:19:55.911275 containerd[2111]: time="2026-04-13T20:19:55.911243580Z" level=info msg="StopPodSandbox for \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\"" Apr 13 20:19:55.914527 kubelet[3399]: I0413 20:19:55.913551 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 
20:19:55.919787 containerd[2111]: time="2026-04-13T20:19:55.919058158Z" level=info msg="StopPodSandbox for \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\"" Apr 13 20:19:55.921518 containerd[2111]: time="2026-04-13T20:19:55.921482749Z" level=info msg="Ensure that sandbox 1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d in task-service has been cleanup successfully" Apr 13 20:19:55.932249 containerd[2111]: time="2026-04-13T20:19:55.923410744Z" level=info msg="Ensure that sandbox f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5 in task-service has been cleanup successfully" Apr 13 20:19:55.958753 kubelet[3399]: I0413 20:19:55.958602 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:19:55.963606 containerd[2111]: time="2026-04-13T20:19:55.963293831Z" level=info msg="StopPodSandbox for \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\"" Apr 13 20:19:55.964497 containerd[2111]: time="2026-04-13T20:19:55.963832046Z" level=info msg="Ensure that sandbox e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802 in task-service has been cleanup successfully" Apr 13 20:19:56.115941 kubelet[3399]: I0413 20:19:56.109482 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-497b7" podStartSLOduration=4.008431816 podStartE2EDuration="20.093944012s" podCreationTimestamp="2026-04-13 20:19:36 +0000 UTC" firstStartedPulling="2026-04-13 20:19:37.067017356 +0000 UTC m=+22.838760494" lastFinishedPulling="2026-04-13 20:19:53.152529552 +0000 UTC m=+38.924272690" observedRunningTime="2026-04-13 20:19:55.802890653 +0000 UTC m=+41.574633814" watchObservedRunningTime="2026-04-13 20:19:56.093944012 +0000 UTC m=+41.865687171" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.347 [INFO][4714] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.354 [INFO][4714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" iface="eth0" netns="/var/run/netns/cni-431c2050-ea41-b993-78be-77764d2cf794" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.354 [INFO][4714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" iface="eth0" netns="/var/run/netns/cni-431c2050-ea41-b993-78be-77764d2cf794" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.359 [INFO][4714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" iface="eth0" netns="/var/run/netns/cni-431c2050-ea41-b993-78be-77764d2cf794" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.359 [INFO][4714] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.359 [INFO][4714] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.883 [INFO][4833] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.884 [INFO][4833] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.884 [INFO][4833] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.911 [WARNING][4833] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.911 [INFO][4833] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.913 [INFO][4833] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:56.927251 containerd[2111]: 2026-04-13 20:19:56.922 [INFO][4714] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597"
Apr 13 20:19:56.934440 systemd[1]: run-netns-cni\x2d431c2050\x2dea41\x2db993\x2d78be\x2d77764d2cf794.mount: Deactivated successfully.
Apr 13 20:19:56.937071 containerd[2111]: time="2026-04-13T20:19:56.936934956Z" level=info msg="TearDown network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" successfully"
Apr 13 20:19:56.937071 containerd[2111]: time="2026-04-13T20:19:56.936987407Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" returns successfully"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.347 [INFO][4718] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.348 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" iface="eth0" netns="/var/run/netns/cni-ac0268be-a735-771a-2e2e-010dcaeb87f1"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.352 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" iface="eth0" netns="/var/run/netns/cni-ac0268be-a735-771a-2e2e-010dcaeb87f1"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.358 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" iface="eth0" netns="/var/run/netns/cni-ac0268be-a735-771a-2e2e-010dcaeb87f1"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.358 [INFO][4718] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.358 [INFO][4718] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.883 [INFO][4834] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.885 [INFO][4834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.913 [INFO][4834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.925 [WARNING][4834] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.925 [INFO][4834] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0"
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.934 [INFO][4834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:56.950535 containerd[2111]: 2026-04-13 20:19:56.945 [INFO][4718] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9"
Apr 13 20:19:56.950535 containerd[2111]: time="2026-04-13T20:19:56.950453379Z" level=info msg="TearDown network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" successfully"
Apr 13 20:19:56.950535 containerd[2111]: time="2026-04-13T20:19:56.950486805Z" level=info msg="StopPodSandbox for \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" returns successfully"
Apr 13 20:19:56.961263 systemd[1]: run-netns-cni\x2dac0268be\x2da735\x2d771a\x2d2e2e\x2d010dcaeb87f1.mount: Deactivated successfully.
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.382 [INFO][4746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.385 [INFO][4746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" iface="eth0" netns="/var/run/netns/cni-00308667-e2bc-89ff-fc66-cd4bdb50dfcd"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.391 [INFO][4746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" iface="eth0" netns="/var/run/netns/cni-00308667-e2bc-89ff-fc66-cd4bdb50dfcd"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.395 [INFO][4746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" iface="eth0" netns="/var/run/netns/cni-00308667-e2bc-89ff-fc66-cd4bdb50dfcd"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.395 [INFO][4746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.395 [INFO][4746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.890 [INFO][4840] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.896 [INFO][4840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.935 [INFO][4840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.945 [WARNING][4840] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.945 [INFO][4840] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0"
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.950 [INFO][4840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:56.975179 containerd[2111]: 2026-04-13 20:19:56.964 [INFO][4746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799"
Apr 13 20:19:56.975179 containerd[2111]: time="2026-04-13T20:19:56.973576031Z" level=info msg="TearDown network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" successfully"
Apr 13 20:19:56.975179 containerd[2111]: time="2026-04-13T20:19:56.973607892Z" level=info msg="StopPodSandbox for \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" returns successfully"
Apr 13 20:19:56.975179 containerd[2111]: time="2026-04-13T20:19:56.974032457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f779ffd-5rvbw,Uid:36b65341-e0c3-462c-84cd-6efbb156c217,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:56.984405 systemd[1]: run-netns-cni\x2d00308667\x2de2bc\x2d89ff\x2dfc66\x2dcd4bdb50dfcd.mount: Deactivated successfully.
Apr 13 20:19:56.990949 containerd[2111]: time="2026-04-13T20:19:56.990871121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqfg2,Uid:071cac38-3892-4c51-a239-a556805da745,Namespace:kube-system,Attempt:1,}"
Apr 13 20:19:57.004893 containerd[2111]: time="2026-04-13T20:19:57.004787730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6p8z6,Uid:e7110801-c02b-4008-8b00-b210e85462f6,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.526 [INFO][4719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.527 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" iface="eth0" netns="/var/run/netns/cni-a77222d2-aac4-f509-e221-b16e8626789d"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.528 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" iface="eth0" netns="/var/run/netns/cni-a77222d2-aac4-f509-e221-b16e8626789d"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.531 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" iface="eth0" netns="/var/run/netns/cni-a77222d2-aac4-f509-e221-b16e8626789d"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.531 [INFO][4719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.531 [INFO][4719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.895 [INFO][4857] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.900 [INFO][4857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.950 [INFO][4857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.970 [WARNING][4857] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.970 [INFO][4857] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0"
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.979 [INFO][4857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:57.011364 containerd[2111]: 2026-04-13 20:19:56.988 [INFO][4719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02"
Apr 13 20:19:57.020850 containerd[2111]: time="2026-04-13T20:19:57.020583090Z" level=info msg="TearDown network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" successfully"
Apr 13 20:19:57.020850 containerd[2111]: time="2026-04-13T20:19:57.020616501Z" level=info msg="StopPodSandbox for \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" returns successfully"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.584 [INFO][4788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.584 [INFO][4788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" iface="eth0" netns="/var/run/netns/cni-2ac360bb-eadb-1d56-4c01-2758e329b8b0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.586 [INFO][4788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" iface="eth0" netns="/var/run/netns/cni-2ac360bb-eadb-1d56-4c01-2758e329b8b0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.589 [INFO][4788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" iface="eth0" netns="/var/run/netns/cni-2ac360bb-eadb-1d56-4c01-2758e329b8b0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.589 [INFO][4788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.589 [INFO][4788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.907 [INFO][4866] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.908 [INFO][4866] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.979 [INFO][4866] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.996 [WARNING][4866] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.996 [INFO][4866] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0"
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:56.999 [INFO][4866] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:57.020850 containerd[2111]: 2026-04-13 20:19:57.003 [INFO][4788] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d"
Apr 13 20:19:57.026404 containerd[2111]: time="2026-04-13T20:19:57.026357145Z" level=info msg="TearDown network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" successfully"
Apr 13 20:19:57.029186 systemd[1]: run-netns-cni\x2da77222d2\x2daac4\x2df509\x2de221\x2db16e8626789d.mount: Deactivated successfully.
Apr 13 20:19:57.029685 systemd[1]: run-netns-cni\x2d2ac360bb\x2deadb\x2d1d56\x2d4c01\x2d2758e329b8b0.mount: Deactivated successfully.
Apr 13 20:19:57.031947 containerd[2111]: time="2026-04-13T20:19:57.030765081Z" level=info msg="StopPodSandbox for \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" returns successfully"
Apr 13 20:19:57.031947 containerd[2111]: time="2026-04-13T20:19:57.031194849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-497fh,Uid:ef0b011b-1723-449b-9203-e921c49b3890,Namespace:kube-system,Attempt:1,}"
Apr 13 20:19:57.036049 containerd[2111]: time="2026-04-13T20:19:57.036010907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b846df46d-nfrl6,Uid:2fb4a49c-3535-4abb-96bd-a03060aeb7aa,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.542 [INFO][4772] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.546 [INFO][4772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" iface="eth0" netns="/var/run/netns/cni-da91b940-a320-b2a7-fca9-355821e8868e"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.549 [INFO][4772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" iface="eth0" netns="/var/run/netns/cni-da91b940-a320-b2a7-fca9-355821e8868e"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.566 [INFO][4772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" iface="eth0" netns="/var/run/netns/cni-da91b940-a320-b2a7-fca9-355821e8868e"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.566 [INFO][4772] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.566 [INFO][4772] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.909 [INFO][4863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.910 [INFO][4863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:56.998 [INFO][4863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:57.012 [WARNING][4863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:57.012 [INFO][4863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0"
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:57.014 [INFO][4863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:57.036931 containerd[2111]: 2026-04-13 20:19:57.021 [INFO][4772] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802"
Apr 13 20:19:57.041201 containerd[2111]: time="2026-04-13T20:19:57.039339733Z" level=info msg="TearDown network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" successfully"
Apr 13 20:19:57.041201 containerd[2111]: time="2026-04-13T20:19:57.039370817Z" level=info msg="StopPodSandbox for \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" returns successfully"
Apr 13 20:19:57.042411 containerd[2111]: time="2026-04-13T20:19:57.042365436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-hnxzb,Uid:fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.590 [INFO][4775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.592 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" iface="eth0" netns="/var/run/netns/cni-be0631dc-2c77-bebe-328e-c1b0ab983b37"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.593 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" iface="eth0" netns="/var/run/netns/cni-be0631dc-2c77-bebe-328e-c1b0ab983b37"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.594 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" iface="eth0" netns="/var/run/netns/cni-be0631dc-2c77-bebe-328e-c1b0ab983b37"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.595 [INFO][4775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.595 [INFO][4775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.915 [INFO][4868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:56.917 [INFO][4868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:57.014 [INFO][4868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:57.026 [WARNING][4868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:57.029 [INFO][4868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0"
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:57.039 [INFO][4868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:57.047390 containerd[2111]: 2026-04-13 20:19:57.045 [INFO][4775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00"
Apr 13 20:19:57.051216 containerd[2111]: time="2026-04-13T20:19:57.047700410Z" level=info msg="TearDown network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" successfully"
Apr 13 20:19:57.051216 containerd[2111]: time="2026-04-13T20:19:57.047728809Z" level=info msg="StopPodSandbox for \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" returns successfully"
Apr 13 20:19:57.051216 containerd[2111]: time="2026-04-13T20:19:57.048436583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs499,Uid:b6a5c10c-7432-4751-8918-4251f504fa44,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.680 [INFO][4812] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.681 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" iface="eth0" netns="/var/run/netns/cni-5a001624-003b-f302-eda2-edb2730026e1"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.682 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" iface="eth0" netns="/var/run/netns/cni-5a001624-003b-f302-eda2-edb2730026e1"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.690 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" iface="eth0" netns="/var/run/netns/cni-5a001624-003b-f302-eda2-edb2730026e1"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.690 [INFO][4812] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.690 [INFO][4812] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.925 [INFO][4887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:56.926 [INFO][4887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:57.036 [INFO][4887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:57.054 [WARNING][4887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:57.054 [INFO][4887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0"
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:57.056 [INFO][4887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:19:57.063577 containerd[2111]: 2026-04-13 20:19:57.059 [INFO][4812] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5"
Apr 13 20:19:57.076620 containerd[2111]: time="2026-04-13T20:19:57.076558041Z" level=info msg="TearDown network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" successfully"
Apr 13 20:19:57.076804 containerd[2111]: time="2026-04-13T20:19:57.076784104Z" level=info msg="StopPodSandbox for \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" returns successfully"
Apr 13 20:19:57.100538 containerd[2111]: time="2026-04-13T20:19:57.100479985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-qk82w,Uid:82c2fe9b-a341-45a1-998b-bc57b78f0096,Namespace:calico-system,Attempt:1,}"
Apr 13 20:19:57.206732 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:19:57.205198 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:19:57.205227 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:19:57.348691 systemd[1]: run-netns-cni\x2d5a001624\x2d003b\x2df302\x2deda2\x2dedb2730026e1.mount: Deactivated successfully.
Apr 13 20:19:57.348898 systemd[1]: run-netns-cni\x2dda91b940\x2da320\x2db2a7\x2dfca9\x2d355821e8868e.mount: Deactivated successfully.
Apr 13 20:19:57.349069 systemd[1]: run-netns-cni\x2dbe0631dc\x2d2c77\x2dbebe\x2d328e\x2dc1b0ab983b37.mount: Deactivated successfully.
Apr 13 20:19:57.576973 systemd-networkd[1657]: cali76f76e4dd09: Link UP
Apr 13 20:19:57.577266 systemd-networkd[1657]: cali76f76e4dd09: Gained carrier
Apr 13 20:19:57.586423 (udev-worker)[5045]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.168 [ERROR][4928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.213 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0 whisker-7f9f779ffd- calico-system 36b65341-e0c3-462c-84cd-6efbb156c217 886 0 2026-04-13 20:19:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f9f779ffd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-28 whisker-7f9f779ffd-5rvbw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali76f76e4dd09 [] [] }} ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-"
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.213 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.418 [INFO][4964] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.447 [INFO][4964] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"whisker-7f9f779ffd-5rvbw", "timestamp":"2026-04-13 20:19:57.41898385 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b2000)}
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.447 [INFO][4964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.447 [INFO][4964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.447 [INFO][4964] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.451 [INFO][4964] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.461 [INFO][4964] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.472 [INFO][4964] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.477 [INFO][4964] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.484 [INFO][4964] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.484 [INFO][4964] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.488 [INFO][4964] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4 Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.499 [INFO][4964] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.520 [INFO][4964] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.65/26] block=192.168.120.64/26 
handle="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.520 [INFO][4964] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.65/26] handle="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" host="ip-172-31-17-28" Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.520 [INFO][4964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:57.607240 containerd[2111]: 2026-04-13 20:19:57.521 [INFO][4964] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.65/26] IPv6=[] ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.531 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0", GenerateName:"whisker-7f9f779ffd-", Namespace:"calico-system", SelfLink:"", UID:"36b65341-e0c3-462c-84cd-6efbb156c217", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9f779ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"whisker-7f9f779ffd-5rvbw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f76e4dd09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.534 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.65/32] ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.534 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76f76e4dd09 ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.565 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.567 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" 
Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0", GenerateName:"whisker-7f9f779ffd-", Namespace:"calico-system", SelfLink:"", UID:"36b65341-e0c3-462c-84cd-6efbb156c217", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9f779ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4", Pod:"whisker-7f9f779ffd-5rvbw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f76e4dd09", MAC:"0a:5e:ff:25:c1:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:57.609693 containerd[2111]: 2026-04-13 20:19:57.601 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Namespace="calico-system" Pod="whisker-7f9f779ffd-5rvbw" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:19:57.797052 
systemd-networkd[1657]: caliafc92d849ad: Link UP Apr 13 20:19:57.798909 systemd-networkd[1657]: caliafc92d849ad: Gained carrier Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.243 [ERROR][4942] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.263 [INFO][4942] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0 coredns-674b8bbfcf- kube-system 071cac38-3892-4c51-a239-a556805da745 887 0 2026-04-13 20:19:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-28 coredns-674b8bbfcf-xqfg2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliafc92d849ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.263 [INFO][4942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.649 [INFO][4974] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" HandleID="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" 
Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.686 [INFO][4974] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" HandleID="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e890), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-28", "pod":"coredns-674b8bbfcf-xqfg2", "timestamp":"2026-04-13 20:19:57.649123993 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004842c0)} Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.686 [INFO][4974] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.686 [INFO][4974] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.686 [INFO][4974] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.702 [INFO][4974] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.731 [INFO][4974] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.738 [INFO][4974] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.744 [INFO][4974] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.752 [INFO][4974] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.752 [INFO][4974] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.755 [INFO][4974] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.766 [INFO][4974] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4974] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.66/26] block=192.168.120.64/26 
handle="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4974] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.66/26] handle="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" host="ip-172-31-17-28" Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4974] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:57.851285 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4974] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.66/26] IPv6=[] ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" HandleID="k8s-pod-network.e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.792 [INFO][4942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071cac38-3892-4c51-a239-a556805da745", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"coredns-674b8bbfcf-xqfg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafc92d849ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.792 [INFO][4942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.66/32] ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.792 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafc92d849ad ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.801 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.807 [INFO][4942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071cac38-3892-4c51-a239-a556805da745", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a", Pod:"coredns-674b8bbfcf-xqfg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafc92d849ad", MAC:"9e:a8:7e:90:5d:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:57.852337 containerd[2111]: 2026-04-13 20:19:57.832 [INFO][4942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqfg2" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:19:57.983205 systemd-networkd[1657]: calie59edc271c8: Link UP Apr 13 20:19:57.987405 systemd-networkd[1657]: calie59edc271c8: Gained carrier Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.260 [ERROR][4944] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.284 [INFO][4944] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0 goldmane-5b85766d88- calico-system e7110801-c02b-4008-8b00-b210e85462f6 888 0 2026-04-13 20:19:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-28 goldmane-5b85766d88-6p8z6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie59edc271c8 [] [] }} ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" 
WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.284 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.679 [INFO][4990] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" HandleID="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.721 [INFO][4990] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" HandleID="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123b40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"goldmane-5b85766d88-6p8z6", "timestamp":"2026-04-13 20:19:57.679698593 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000405760)} Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.722 [INFO][4990] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.778 [INFO][4990] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.796 [INFO][4990] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.828 [INFO][4990] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.858 [INFO][4990] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.863 [INFO][4990] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.869 [INFO][4990] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.869 [INFO][4990] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.875 [INFO][4990] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.886 [INFO][4990] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.903 [INFO][4990] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.67/26] block=192.168.120.64/26 
handle="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.903 [INFO][4990] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.67/26] handle="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" host="ip-172-31-17-28" Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.904 [INFO][4990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:58.095606 containerd[2111]: 2026-04-13 20:19:57.904 [INFO][4990] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.67/26] IPv6=[] ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" HandleID="k8s-pod-network.9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:57.934 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e7110801-c02b-4008-8b00-b210e85462f6", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"goldmane-5b85766d88-6p8z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie59edc271c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:57.940 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.67/32] ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:57.940 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie59edc271c8 ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:58.005 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:58.015 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e7110801-c02b-4008-8b00-b210e85462f6", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e", Pod:"goldmane-5b85766d88-6p8z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie59edc271c8", MAC:"a6:37:d1:95:be:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.097624 containerd[2111]: 2026-04-13 20:19:58.040 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e" Namespace="calico-system" Pod="goldmane-5b85766d88-6p8z6" 
WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:19:58.220746 containerd[2111]: time="2026-04-13T20:19:58.208624793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:58.220746 containerd[2111]: time="2026-04-13T20:19:58.208723888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:58.220746 containerd[2111]: time="2026-04-13T20:19:58.208743906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.267117 containerd[2111]: time="2026-04-13T20:19:58.263511831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:58.267117 containerd[2111]: time="2026-04-13T20:19:58.263591296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:58.267117 containerd[2111]: time="2026-04-13T20:19:58.263615933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.292894 containerd[2111]: time="2026-04-13T20:19:58.291514564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.340289 systemd-networkd[1657]: calia5efbe8fe73: Link UP Apr 13 20:19:58.340629 systemd-networkd[1657]: calia5efbe8fe73: Gained carrier Apr 13 20:19:58.359383 containerd[2111]: time="2026-04-13T20:19:58.345585843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.674 [ERROR][4975] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.737 [INFO][4975] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0 coredns-674b8bbfcf- kube-system ef0b011b-1723-449b-9203-e921c49b3890 892 0 2026-04-13 20:19:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-28 coredns-674b8bbfcf-497fh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5efbe8fe73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.737 [INFO][4975] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.841 [INFO][5077] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" HandleID="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.463633 
containerd[2111]: 2026-04-13 20:19:57.858 [INFO][5077] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" HandleID="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb7a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-28", "pod":"coredns-674b8bbfcf-497fh", "timestamp":"2026-04-13 20:19:57.841168412 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fd1e0)} Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.858 [INFO][5077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.904 [INFO][5077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.913 [INFO][5077] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.919 [INFO][5077] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:57.937 [INFO][5077] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.031 [INFO][5077] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.066 [INFO][5077] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.082 [INFO][5077] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.082 [INFO][5077] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.134 [INFO][5077] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2 Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.165 [INFO][5077] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.195 [INFO][5077] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.68/26] block=192.168.120.64/26 
handle="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.201 [INFO][5077] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.68/26] handle="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" host="ip-172-31-17-28" Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.210 [INFO][5077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:58.463633 containerd[2111]: 2026-04-13 20:19:58.218 [INFO][5077] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.68/26] IPv6=[] ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" HandleID="k8s-pod-network.3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.296 [INFO][4975] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0b011b-1723-449b-9203-e921c49b3890", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"coredns-674b8bbfcf-497fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5efbe8fe73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.314 [INFO][4975] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.68/32] ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.314 [INFO][4975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5efbe8fe73 ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.340 [INFO][4975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.355 [INFO][4975] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0b011b-1723-449b-9203-e921c49b3890", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2", Pod:"coredns-674b8bbfcf-497fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5efbe8fe73", MAC:"fe:4a:c1:6a:8a:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.473853 containerd[2111]: 2026-04-13 20:19:58.404 [INFO][4975] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2" Namespace="kube-system" Pod="coredns-674b8bbfcf-497fh" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:19:58.529614 systemd-networkd[1657]: calicf4b1ea02b1: Link UP Apr 13 20:19:58.531015 systemd-networkd[1657]: calicf4b1ea02b1: Gained carrier Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.528 [ERROR][4993] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.588 [INFO][4993] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0 calico-apiserver-5944887d59- calico-system fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6 895 0 2026-04-13 20:19:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5944887d59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-28 calico-apiserver-5944887d59-hnxzb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calicf4b1ea02b1 [] [] }} ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" 
Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.588 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.837 [INFO][5049] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" HandleID="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.865 [INFO][5049] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" HandleID="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"calico-apiserver-5944887d59-hnxzb", "timestamp":"2026-04-13 20:19:57.837724568 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b6420)} Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:57.865 [INFO][5049] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.218 [INFO][5049] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.218 [INFO][5049] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.274 [INFO][5049] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.293 [INFO][5049] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.381 [INFO][5049] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.388 [INFO][5049] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.407 [INFO][5049] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.408 [INFO][5049] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.439 [INFO][5049] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2 Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.451 [INFO][5049] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 
2026-04-13 20:19:58.470 [INFO][5049] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.69/26] block=192.168.120.64/26 handle="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.474 [INFO][5049] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.69/26] handle="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" host="ip-172-31-17-28" Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.477 [INFO][5049] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:58.591208 containerd[2111]: 2026-04-13 20:19:58.477 [INFO][5049] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.69/26] IPv6=[] ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" HandleID="k8s-pod-network.d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.508 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-apiserver-5944887d59-hnxzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf4b1ea02b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.508 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.69/32] ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.510 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf4b1ea02b1 ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.535 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" 
WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.547 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2", Pod:"calico-apiserver-5944887d59-hnxzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf4b1ea02b1", MAC:"3a:89:5e:31:92:76", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.594306 containerd[2111]: 2026-04-13 20:19:58.580 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2" Namespace="calico-system" Pod="calico-apiserver-5944887d59-hnxzb" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:19:58.641597 containerd[2111]: time="2026-04-13T20:19:58.641376933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:58.649542 containerd[2111]: time="2026-04-13T20:19:58.643286359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:58.649542 containerd[2111]: time="2026-04-13T20:19:58.643663848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.649542 containerd[2111]: time="2026-04-13T20:19:58.645212959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.706102 containerd[2111]: time="2026-04-13T20:19:58.704600181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:58.706102 containerd[2111]: time="2026-04-13T20:19:58.704673988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:58.706102 containerd[2111]: time="2026-04-13T20:19:58.704690883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.706102 containerd[2111]: time="2026-04-13T20:19:58.705485760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:58.754731 systemd-networkd[1657]: cali76cdf3a7a3b: Link UP Apr 13 20:19:58.767745 systemd-networkd[1657]: cali76cdf3a7a3b: Gained carrier Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.589 [ERROR][5011] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.626 [INFO][5011] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0 csi-node-driver- calico-system b6a5c10c-7432-4751-8918-4251f504fa44 898 0 2026-04-13 20:19:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-28 csi-node-driver-bs499 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali76cdf3a7a3b [] [] }} ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.626 [INFO][5011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" 
WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.862 [INFO][5057] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" HandleID="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.878 [INFO][5057] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" HandleID="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000488160), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"csi-node-driver-bs499", "timestamp":"2026-04-13 20:19:57.861214966 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112c60)} Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:57.878 [INFO][5057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.486 [INFO][5057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.486 [INFO][5057] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.496 [INFO][5057] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.519 [INFO][5057] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.567 [INFO][5057] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.588 [INFO][5057] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.601 [INFO][5057] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.601 [INFO][5057] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.629 [INFO][5057] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321 Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.657 [INFO][5057] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.686 [INFO][5057] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.70/26] block=192.168.120.64/26 
handle="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.687 [INFO][5057] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.70/26] handle="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" host="ip-172-31-17-28" Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.687 [INFO][5057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:58.908164 containerd[2111]: 2026-04-13 20:19:58.687 [INFO][5057] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.70/26] IPv6=[] ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" HandleID="k8s-pod-network.d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.733 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6a5c10c-7432-4751-8918-4251f504fa44", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"csi-node-driver-bs499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76cdf3a7a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.733 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.70/32] ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.733 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76cdf3a7a3b ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.792 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.819 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6a5c10c-7432-4751-8918-4251f504fa44", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321", Pod:"csi-node-driver-bs499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76cdf3a7a3b", MAC:"f2:98:87:20:02:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:58.911129 containerd[2111]: 2026-04-13 20:19:58.863 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321" Namespace="calico-system" Pod="csi-node-driver-bs499" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:19:58.982312 systemd-networkd[1657]: cali16a61cfc1f8: Link UP Apr 13 20:19:58.992434 systemd-networkd[1657]: cali16a61cfc1f8: Gained carrier Apr 13 20:19:58.997401 containerd[2111]: time="2026-04-13T20:19:58.997282737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqfg2,Uid:071cac38-3892-4c51-a239-a556805da745,Namespace:kube-system,Attempt:1,} returns sandbox id \"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a\"" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:57.656 [ERROR][5001] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:57.736 [INFO][5001] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0 calico-kube-controllers-6b846df46d- calico-system 2fb4a49c-3535-4abb-96bd-a03060aeb7aa 897 0 2026-04-13 20:19:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b846df46d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-28 calico-kube-controllers-6b846df46d-nfrl6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali16a61cfc1f8 [] [] }} ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" 
WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:57.736 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.378 [INFO][5075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" HandleID="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.458 [INFO][5075] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" HandleID="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"calico-kube-controllers-6b846df46d-nfrl6", "timestamp":"2026-04-13 20:19:58.378178513 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f2160)} Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.460 [INFO][5075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.701 [INFO][5075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.702 [INFO][5075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.706 [INFO][5075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.724 [INFO][5075] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.764 [INFO][5075] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.812 [INFO][5075] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.819 [INFO][5075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.819 [INFO][5075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.827 [INFO][5075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.859 [INFO][5075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 
2026-04-13 20:19:58.885 [INFO][5075] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.71/26] block=192.168.120.64/26 handle="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.885 [INFO][5075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.71/26] handle="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" host="ip-172-31-17-28" Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.885 [INFO][5075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:19:59.067227 containerd[2111]: 2026-04-13 20:19:58.885 [INFO][5075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.71/26] IPv6=[] ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" HandleID="k8s-pod-network.f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:58.938 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0", GenerateName:"calico-kube-controllers-6b846df46d-", Namespace:"calico-system", SelfLink:"", UID:"2fb4a49c-3535-4abb-96bd-a03060aeb7aa", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b846df46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-kube-controllers-6b846df46d-nfrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a61cfc1f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:58.939 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.71/32] ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:58.939 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16a61cfc1f8 ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:58.993 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:59.007 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0", GenerateName:"calico-kube-controllers-6b846df46d-", Namespace:"calico-system", SelfLink:"", UID:"2fb4a49c-3535-4abb-96bd-a03060aeb7aa", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b846df46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d", Pod:"calico-kube-controllers-6b846df46d-nfrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a61cfc1f8", MAC:"32:2b:63:50:f3:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:59.076885 containerd[2111]: 2026-04-13 20:19:59.044 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d" Namespace="calico-system" Pod="calico-kube-controllers-6b846df46d-nfrl6" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:19:59.082045 containerd[2111]: time="2026-04-13T20:19:59.078542234Z" level=info msg="CreateContainer within sandbox \"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:19:59.099070 containerd[2111]: time="2026-04-13T20:19:59.098939103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:59.101785 containerd[2111]: time="2026-04-13T20:19:59.099029739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:59.101785 containerd[2111]: time="2026-04-13T20:19:59.100436667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.102768 containerd[2111]: time="2026-04-13T20:19:59.102191082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.114478 systemd-networkd[1657]: cali61e7eb10fd9: Link UP Apr 13 20:19:59.117991 systemd-networkd[1657]: cali61e7eb10fd9: Gained carrier Apr 13 20:19:59.137001 containerd[2111]: time="2026-04-13T20:19:59.136959533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-497fh,Uid:ef0b011b-1723-449b-9203-e921c49b3890,Namespace:kube-system,Attempt:1,} returns sandbox id \"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2\"" Apr 13 20:19:59.143447 containerd[2111]: time="2026-04-13T20:19:59.143346130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f779ffd-5rvbw,Uid:36b65341-e0c3-462c-84cd-6efbb156c217,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\"" Apr 13 20:19:59.158014 containerd[2111]: time="2026-04-13T20:19:59.157972690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:19:59.159621 containerd[2111]: time="2026-04-13T20:19:59.159479871Z" level=info msg="CreateContainer within sandbox \"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:19:59.192845 systemd-networkd[1657]: cali76f76e4dd09: Gained IPv6LL Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:57.715 [ERROR][5026] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:57.749 [INFO][5026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0 calico-apiserver-5944887d59- calico-system 82c2fe9b-a341-45a1-998b-bc57b78f0096 901 0 2026-04-13 20:19:35 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5944887d59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-28 calico-apiserver-5944887d59-qk82w eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali61e7eb10fd9 [] [] }} ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:57.751 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.382 [INFO][5080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" HandleID="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.461 [INFO][5080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" HandleID="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ccb60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"calico-apiserver-5944887d59-qk82w", "timestamp":"2026-04-13 
20:19:58.382305207 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac580)} Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.462 [INFO][5080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.895 [INFO][5080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.895 [INFO][5080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.913 [INFO][5080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.964 [INFO][5080] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:58.998 [INFO][5080] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.019 [INFO][5080] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.037 [INFO][5080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.037 [INFO][5080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.053 
[INFO][5080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.071 [INFO][5080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.087 [INFO][5080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.72/26] block=192.168.120.64/26 handle="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.087 [INFO][5080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.72/26] handle="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" host="ip-172-31-17-28" Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.087 [INFO][5080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:19:59.200948 containerd[2111]: 2026-04-13 20:19:59.087 [INFO][5080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.72/26] IPv6=[] ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" HandleID="k8s-pod-network.037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.102 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"82c2fe9b-a341-45a1-998b-bc57b78f0096", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-apiserver-5944887d59-qk82w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e7eb10fd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.102 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.72/32] ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.102 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61e7eb10fd9 ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.135 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.138 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"82c2fe9b-a341-45a1-998b-bc57b78f0096", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb", Pod:"calico-apiserver-5944887d59-qk82w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e7eb10fd9", MAC:"b2:e3:c7:2c:57:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:19:59.202180 containerd[2111]: 2026-04-13 20:19:59.175 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb" Namespace="calico-system" Pod="calico-apiserver-5944887d59-qk82w" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:19:59.255432 systemd-journald[1573]: Under memory pressure, flushing caches. 
Apr 13 20:19:59.249802 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:19:59.249826 systemd-resolved[1989]: Flushed all caches. Apr 13 20:19:59.272371 containerd[2111]: time="2026-04-13T20:19:59.270113975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6p8z6,Uid:e7110801-c02b-4008-8b00-b210e85462f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e\"" Apr 13 20:19:59.275639 containerd[2111]: time="2026-04-13T20:19:59.274307500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:59.275639 containerd[2111]: time="2026-04-13T20:19:59.274405628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:59.275639 containerd[2111]: time="2026-04-13T20:19:59.274461685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.275639 containerd[2111]: time="2026-04-13T20:19:59.275236072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.277625 containerd[2111]: time="2026-04-13T20:19:59.277426890Z" level=info msg="CreateContainer within sandbox \"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49e15ab25fdd033ee59c54843f7df0ac9d0b81ac28add6f86b006ef948d2a532\"" Apr 13 20:19:59.281707 containerd[2111]: time="2026-04-13T20:19:59.280873616Z" level=info msg="StartContainer for \"49e15ab25fdd033ee59c54843f7df0ac9d0b81ac28add6f86b006ef948d2a532\"" Apr 13 20:19:59.322293 containerd[2111]: time="2026-04-13T20:19:59.300376968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:59.322293 containerd[2111]: time="2026-04-13T20:19:59.300462756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:59.322293 containerd[2111]: time="2026-04-13T20:19:59.300488439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.322293 containerd[2111]: time="2026-04-13T20:19:59.300628495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.369405 containerd[2111]: time="2026-04-13T20:19:59.369360389Z" level=info msg="CreateContainer within sandbox \"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06b35a58224e22f967a9de8085d3d7f56bae5ade3c877904c5bef965396c5ccd\"" Apr 13 20:19:59.374976 containerd[2111]: time="2026-04-13T20:19:59.374934186Z" level=info msg="StartContainer for \"06b35a58224e22f967a9de8085d3d7f56bae5ade3c877904c5bef965396c5ccd\"" Apr 13 20:19:59.377351 systemd-networkd[1657]: caliafc92d849ad: Gained IPv6LL Apr 13 20:19:59.480177 kernel: calico-node[5126]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:19:59.487353 systemd[1]: run-containerd-runc-k8s.io-49e15ab25fdd033ee59c54843f7df0ac9d0b81ac28add6f86b006ef948d2a532-runc.CeqcMQ.mount: Deactivated successfully. Apr 13 20:19:59.532380 containerd[2111]: time="2026-04-13T20:19:59.498376971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:19:59.532380 containerd[2111]: time="2026-04-13T20:19:59.498455149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:19:59.532380 containerd[2111]: time="2026-04-13T20:19:59.498481984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.532380 containerd[2111]: time="2026-04-13T20:19:59.498609707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:19:59.570067 systemd-networkd[1657]: calicf4b1ea02b1: Gained IPv6LL Apr 13 20:19:59.595434 containerd[2111]: time="2026-04-13T20:19:59.595392313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs499,Uid:b6a5c10c-7432-4751-8918-4251f504fa44,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321\"" Apr 13 20:19:59.711029 containerd[2111]: time="2026-04-13T20:19:59.710961585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-hnxzb,Uid:fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6,Namespace:calico-system,Attempt:1,} returns sandbox id \"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2\"" Apr 13 20:19:59.815751 containerd[2111]: time="2026-04-13T20:19:59.815386876Z" level=info msg="StartContainer for \"06b35a58224e22f967a9de8085d3d7f56bae5ade3c877904c5bef965396c5ccd\" returns successfully" Apr 13 20:19:59.888241 containerd[2111]: time="2026-04-13T20:19:59.884265817Z" level=info msg="StartContainer for \"49e15ab25fdd033ee59c54843f7df0ac9d0b81ac28add6f86b006ef948d2a532\" returns successfully" Apr 13 20:19:59.888241 containerd[2111]: time="2026-04-13T20:19:59.884405239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b846df46d-nfrl6,Uid:2fb4a49c-3535-4abb-96bd-a03060aeb7aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d\"" Apr 13 20:19:59.891366 
systemd-networkd[1657]: calie59edc271c8: Gained IPv6LL Apr 13 20:19:59.897276 systemd-networkd[1657]: calia5efbe8fe73: Gained IPv6LL Apr 13 20:19:59.987602 containerd[2111]: time="2026-04-13T20:19:59.987559376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5944887d59-qk82w,Uid:82c2fe9b-a341-45a1-998b-bc57b78f0096,Namespace:calico-system,Attempt:1,} returns sandbox id \"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb\"" Apr 13 20:20:00.402034 systemd-networkd[1657]: cali76cdf3a7a3b: Gained IPv6LL Apr 13 20:20:00.532562 kubelet[3399]: I0413 20:20:00.435914 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xqfg2" podStartSLOduration=39.421607066 podStartE2EDuration="39.421607066s" podCreationTimestamp="2026-04-13 20:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:00.419388748 +0000 UTC m=+46.191131912" watchObservedRunningTime="2026-04-13 20:20:00.421607066 +0000 UTC m=+46.193350228" Apr 13 20:20:00.593769 systemd-networkd[1657]: cali16a61cfc1f8: Gained IPv6LL Apr 13 20:20:00.786311 systemd-networkd[1657]: cali61e7eb10fd9: Gained IPv6LL Apr 13 20:20:00.813333 (udev-worker)[5043]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:20:00.833860 systemd-networkd[1657]: vxlan.calico: Link UP Apr 13 20:20:00.835492 systemd-networkd[1657]: vxlan.calico: Gained carrier Apr 13 20:20:01.301687 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:01.297239 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:01.297280 systemd-resolved[1989]: Flushed all caches. 
Apr 13 20:20:01.381238 kubelet[3399]: I0413 20:20:01.378813 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-497fh" podStartSLOduration=40.378787497 podStartE2EDuration="40.378787497s" podCreationTimestamp="2026-04-13 20:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:00.537251094 +0000 UTC m=+46.308994255" watchObservedRunningTime="2026-04-13 20:20:01.378787497 +0000 UTC m=+47.150530658" Apr 13 20:20:01.539575 containerd[2111]: time="2026-04-13T20:20:01.539000895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:01.546278 containerd[2111]: time="2026-04-13T20:20:01.545682493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:20:01.561265 containerd[2111]: time="2026-04-13T20:20:01.552606960Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:01.576762 containerd[2111]: time="2026-04-13T20:20:01.576682768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:01.578768 containerd[2111]: time="2026-04-13T20:20:01.578328328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.419732671s" Apr 13 20:20:01.578768 
containerd[2111]: time="2026-04-13T20:20:01.578417860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:20:01.585086 containerd[2111]: time="2026-04-13T20:20:01.584985055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:20:01.656433 containerd[2111]: time="2026-04-13T20:20:01.656380106Z" level=info msg="CreateContainer within sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:20:01.729345 containerd[2111]: time="2026-04-13T20:20:01.728785881Z" level=info msg="CreateContainer within sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\"" Apr 13 20:20:01.739547 containerd[2111]: time="2026-04-13T20:20:01.735041465Z" level=info msg="StartContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\"" Apr 13 20:20:02.087087 systemd[1]: run-containerd-runc-k8s.io-7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98-runc.mvuluK.mount: Deactivated successfully. Apr 13 20:20:02.364780 containerd[2111]: time="2026-04-13T20:20:02.364614928Z" level=info msg="StartContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" returns successfully" Apr 13 20:20:02.642981 systemd-networkd[1657]: vxlan.calico: Gained IPv6LL Apr 13 20:20:04.502072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059866756.mount: Deactivated successfully. 
Apr 13 20:20:05.122240 containerd[2111]: time="2026-04-13T20:20:05.122192916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:05.125179 containerd[2111]: time="2026-04-13T20:20:05.125006400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:20:05.128222 containerd[2111]: time="2026-04-13T20:20:05.127308600Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:05.132959 containerd[2111]: time="2026-04-13T20:20:05.132911469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:05.134280 containerd[2111]: time="2026-04-13T20:20:05.134243015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.549207987s" Apr 13 20:20:05.134426 containerd[2111]: time="2026-04-13T20:20:05.134404844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:20:05.146830 containerd[2111]: time="2026-04-13T20:20:05.146206305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:20:05.168424 containerd[2111]: time="2026-04-13T20:20:05.168306199Z" level=info msg="CreateContainer within sandbox \"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:20:05.203210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073949283.mount: Deactivated successfully. Apr 13 20:20:05.205588 containerd[2111]: time="2026-04-13T20:20:05.205548713Z" level=info msg="CreateContainer within sandbox \"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c80c19c7c8670105e960dc6549061a516884fedcdd18c46e45f626b3259541d3\"" Apr 13 20:20:05.208035 containerd[2111]: time="2026-04-13T20:20:05.206548141Z" level=info msg="StartContainer for \"c80c19c7c8670105e960dc6549061a516884fedcdd18c46e45f626b3259541d3\"" Apr 13 20:20:05.338711 containerd[2111]: time="2026-04-13T20:20:05.338667077Z" level=info msg="StartContainer for \"c80c19c7c8670105e960dc6549061a516884fedcdd18c46e45f626b3259541d3\" returns successfully" Apr 13 20:20:05.465951 kubelet[3399]: I0413 20:20:05.464829 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-6p8z6" podStartSLOduration=24.606081265 podStartE2EDuration="30.464774969s" podCreationTimestamp="2026-04-13 20:19:35 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.27768351 +0000 UTC m=+45.049426651" lastFinishedPulling="2026-04-13 20:20:05.136377213 +0000 UTC m=+50.908120355" observedRunningTime="2026-04-13 20:20:05.461450776 +0000 UTC m=+51.233193938" watchObservedRunningTime="2026-04-13 20:20:05.464774969 +0000 UTC m=+51.236518129" Apr 13 20:20:05.598130 ntpd[2061]: Listen normally on 6 vxlan.calico 192.168.120.64:123 Apr 13 20:20:05.599103 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 6 vxlan.calico 192.168.120.64:123 Apr 13 20:20:05.599222 ntpd[2061]: Listen normally on 7 cali76f76e4dd09 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:20:05.599308 ntpd[2061]: Listen normally on 8 caliafc92d849ad [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:20:05.599393 ntpd[2061]: 13 Apr 20:20:05 
ntpd[2061]: Listen normally on 7 cali76f76e4dd09 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:20:05.599393 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 8 caliafc92d849ad [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:20:05.599393 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 9 calie59edc271c8 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:20:05.599352 ntpd[2061]: Listen normally on 9 calie59edc271c8 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:20:05.599596 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 10 calia5efbe8fe73 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:20:05.599596 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 11 calicf4b1ea02b1 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:20:05.599596 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 12 cali76cdf3a7a3b [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:20:05.599596 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 13 cali16a61cfc1f8 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:20:05.599596 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 14 cali61e7eb10fd9 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:20:05.599393 ntpd[2061]: Listen normally on 10 calia5efbe8fe73 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:20:05.599869 ntpd[2061]: 13 Apr 20:20:05 ntpd[2061]: Listen normally on 15 vxlan.calico [fe80::6482:21ff:fecf:b241%12]:123 Apr 13 20:20:05.599433 ntpd[2061]: Listen normally on 11 calicf4b1ea02b1 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:20:05.599474 ntpd[2061]: Listen normally on 12 cali76cdf3a7a3b [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:20:05.599516 ntpd[2061]: Listen normally on 13 cali16a61cfc1f8 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:20:05.599557 ntpd[2061]: Listen normally on 14 cali61e7eb10fd9 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:20:05.599599 ntpd[2061]: Listen normally on 15 vxlan.calico [fe80::6482:21ff:fecf:b241%12]:123 Apr 13 20:20:06.868424 containerd[2111]: time="2026-04-13T20:20:06.868368037Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:06.870598 containerd[2111]: time="2026-04-13T20:20:06.870399119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:20:06.873847 containerd[2111]: time="2026-04-13T20:20:06.872858211Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:06.881481 containerd[2111]: time="2026-04-13T20:20:06.881405969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:06.882430 containerd[2111]: time="2026-04-13T20:20:06.882387568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.736126942s" Apr 13 20:20:06.882552 containerd[2111]: time="2026-04-13T20:20:06.882437421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:20:06.884535 containerd[2111]: time="2026-04-13T20:20:06.884280693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:20:06.890846 containerd[2111]: time="2026-04-13T20:20:06.890801307Z" level=info msg="CreateContainer within sandbox \"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:20:06.968067 containerd[2111]: 
time="2026-04-13T20:20:06.968009193Z" level=info msg="CreateContainer within sandbox \"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9642188f31a7cef02180cc77eaacb345f900d1539f7df16c0837dcbfeffea923\"" Apr 13 20:20:06.969263 containerd[2111]: time="2026-04-13T20:20:06.969210858Z" level=info msg="StartContainer for \"9642188f31a7cef02180cc77eaacb345f900d1539f7df16c0837dcbfeffea923\"" Apr 13 20:20:07.051270 containerd[2111]: time="2026-04-13T20:20:07.051207321Z" level=info msg="StartContainer for \"9642188f31a7cef02180cc77eaacb345f900d1539f7df16c0837dcbfeffea923\" returns successfully" Apr 13 20:20:10.001569 containerd[2111]: time="2026-04-13T20:20:10.001521286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:10.003919 containerd[2111]: time="2026-04-13T20:20:10.003827057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:20:10.018988 containerd[2111]: time="2026-04-13T20:20:10.018912789Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:10.023710 containerd[2111]: time="2026-04-13T20:20:10.023645315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:10.024131 containerd[2111]: time="2026-04-13T20:20:10.024093139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.13977862s" Apr 13 20:20:10.024131 containerd[2111]: time="2026-04-13T20:20:10.024130964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:20:10.025656 containerd[2111]: time="2026-04-13T20:20:10.025615177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:20:10.039570 containerd[2111]: time="2026-04-13T20:20:10.039526362Z" level=info msg="CreateContainer within sandbox \"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:20:10.075736 containerd[2111]: time="2026-04-13T20:20:10.075684698Z" level=info msg="CreateContainer within sandbox \"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1eaad58edc031a843f23b04d0c888c405c25d547a69ec02bc2a9584e8afc684c\"" Apr 13 20:20:10.077339 containerd[2111]: time="2026-04-13T20:20:10.076368402Z" level=info msg="StartContainer for \"1eaad58edc031a843f23b04d0c888c405c25d547a69ec02bc2a9584e8afc684c\"" Apr 13 20:20:10.215475 containerd[2111]: time="2026-04-13T20:20:10.215273999Z" level=info msg="StartContainer for \"1eaad58edc031a843f23b04d0c888c405c25d547a69ec02bc2a9584e8afc684c\" returns successfully" Apr 13 20:20:10.577260 kubelet[3399]: I0413 20:20:10.577033 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5944887d59-hnxzb" podStartSLOduration=25.278904889 podStartE2EDuration="35.57700531s" podCreationTimestamp="2026-04-13 20:19:35 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.727355929 +0000 UTC m=+45.499099081" lastFinishedPulling="2026-04-13 20:20:10.025456338 +0000 UTC 
m=+55.797199502" observedRunningTime="2026-04-13 20:20:10.533305931 +0000 UTC m=+56.305049094" watchObservedRunningTime="2026-04-13 20:20:10.57700531 +0000 UTC m=+56.348748468" Apr 13 20:20:11.217587 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:11.217632 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:11.220163 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:11.483491 kubelet[3399]: I0413 20:20:11.483243 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:20:13.265581 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:13.267918 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:13.265609 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:13.805929 containerd[2111]: time="2026-04-13T20:20:13.805872769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:13.808152 containerd[2111]: time="2026-04-13T20:20:13.808025226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:20:13.811234 containerd[2111]: time="2026-04-13T20:20:13.811057966Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:13.816862 containerd[2111]: time="2026-04-13T20:20:13.816813104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:13.818755 containerd[2111]: time="2026-04-13T20:20:13.818583485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.792807019s" Apr 13 20:20:13.818755 containerd[2111]: time="2026-04-13T20:20:13.818633761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:20:13.901345 containerd[2111]: time="2026-04-13T20:20:13.901195701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:20:14.075130 containerd[2111]: time="2026-04-13T20:20:14.075007031Z" level=info msg="CreateContainer within sandbox \"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:20:14.101318 containerd[2111]: time="2026-04-13T20:20:14.101257557Z" level=info msg="CreateContainer within sandbox \"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ae0f7c518b379df55ba512303578e845828fddb35e6c98236c06d0a2519b987\"" Apr 13 20:20:14.111208 containerd[2111]: time="2026-04-13T20:20:14.111170458Z" level=info msg="StartContainer for \"5ae0f7c518b379df55ba512303578e845828fddb35e6c98236c06d0a2519b987\"" Apr 13 20:20:14.347782 containerd[2111]: time="2026-04-13T20:20:14.347135413Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:14.351333 containerd[2111]: time="2026-04-13T20:20:14.351263473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:20:14.359324 containerd[2111]: 
time="2026-04-13T20:20:14.359206448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 457.95556ms" Apr 13 20:20:14.360076 containerd[2111]: time="2026-04-13T20:20:14.359330172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:20:14.361394 containerd[2111]: time="2026-04-13T20:20:14.360647375Z" level=info msg="StartContainer for \"5ae0f7c518b379df55ba512303578e845828fddb35e6c98236c06d0a2519b987\" returns successfully" Apr 13 20:20:14.564092 containerd[2111]: time="2026-04-13T20:20:14.562913199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:20:14.606876 containerd[2111]: time="2026-04-13T20:20:14.606217573Z" level=info msg="CreateContainer within sandbox \"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:20:14.695782 containerd[2111]: time="2026-04-13T20:20:14.695733041Z" level=info msg="CreateContainer within sandbox \"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f3012ccbd00104317a24e9d38710f0c5b91e08bfa6b60c02358eab4991d737b\"" Apr 13 20:20:14.787412 containerd[2111]: time="2026-04-13T20:20:14.786754724Z" level=info msg="StartContainer for \"1f3012ccbd00104317a24e9d38710f0c5b91e08bfa6b60c02358eab4991d737b\"" Apr 13 20:20:14.889420 containerd[2111]: time="2026-04-13T20:20:14.889256239Z" level=info msg="StopPodSandbox for 
\"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\"" Apr 13 20:20:15.077228 containerd[2111]: time="2026-04-13T20:20:15.076808702Z" level=info msg="StartContainer for \"1f3012ccbd00104317a24e9d38710f0c5b91e08bfa6b60c02358eab4991d737b\" returns successfully" Apr 13 20:20:15.170514 kubelet[3399]: I0413 20:20:15.140060 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b846df46d-nfrl6" podStartSLOduration=25.181923726 podStartE2EDuration="39.07878142s" podCreationTimestamp="2026-04-13 20:19:36 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.956195661 +0000 UTC m=+45.727938803" lastFinishedPulling="2026-04-13 20:20:13.853053342 +0000 UTC m=+59.624796497" observedRunningTime="2026-04-13 20:20:15.065912182 +0000 UTC m=+60.837655343" watchObservedRunningTime="2026-04-13 20:20:15.07878142 +0000 UTC m=+60.850524583" Apr 13 20:20:15.316640 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:15.313924 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:15.313947 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:16.419376 kubelet[3399]: I0413 20:20:16.418129 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5944887d59-qk82w" podStartSLOduration=26.993848654 podStartE2EDuration="41.418099868s" podCreationTimestamp="2026-04-13 20:19:35 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.994719266 +0000 UTC m=+45.766462426" lastFinishedPulling="2026-04-13 20:20:14.41897049 +0000 UTC m=+60.190713640" observedRunningTime="2026-04-13 20:20:16.024716606 +0000 UTC m=+61.796459767" watchObservedRunningTime="2026-04-13 20:20:16.418099868 +0000 UTC m=+62.189843029" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:15.762 [WARNING][6159] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e7110801-c02b-4008-8b00-b210e85462f6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e", Pod:"goldmane-5b85766d88-6p8z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie59edc271c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:15.765 [INFO][6159] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:15.766 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" iface="eth0" netns="" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:15.766 [INFO][6159] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:15.766 [INFO][6159] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.392 [INFO][6167] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.399 [INFO][6167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.399 [INFO][6167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.423 [WARNING][6167] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.423 [INFO][6167] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.426 [INFO][6167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:16.444262 containerd[2111]: 2026-04-13 20:20:16.434 [INFO][6159] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:16.486595 containerd[2111]: time="2026-04-13T20:20:16.484205518Z" level=info msg="TearDown network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" successfully" Apr 13 20:20:16.486595 containerd[2111]: time="2026-04-13T20:20:16.484264378Z" level=info msg="StopPodSandbox for \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" returns successfully" Apr 13 20:20:16.986053 containerd[2111]: time="2026-04-13T20:20:16.986004165Z" level=info msg="RemovePodSandbox for \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\"" Apr 13 20:20:16.993363 containerd[2111]: time="2026-04-13T20:20:16.993319195Z" level=info msg="Forcibly stopping sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\"" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.194 [WARNING][6206] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"e7110801-c02b-4008-8b00-b210e85462f6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"9105ca1ba3d6278ffae7c249c3532b2d55f3c658bc17fc67750acafa2d98f98e", Pod:"goldmane-5b85766d88-6p8z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie59edc271c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.195 [INFO][6206] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.195 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" iface="eth0" netns="" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.195 [INFO][6206] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.195 [INFO][6206] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.273 [INFO][6214] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.273 [INFO][6214] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.273 [INFO][6214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.283 [WARNING][6214] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.283 [INFO][6214] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" HandleID="k8s-pod-network.7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Workload="ip--172--31--17--28-k8s-goldmane--5b85766d88--6p8z6-eth0" Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.288 [INFO][6214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:17.301690 containerd[2111]: 2026-04-13 20:20:17.294 [INFO][6206] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799" Apr 13 20:20:17.301690 containerd[2111]: time="2026-04-13T20:20:17.301666008Z" level=info msg="TearDown network for sandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" successfully" Apr 13 20:20:17.361731 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:17.364596 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:17.361768 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:17.372940 containerd[2111]: time="2026-04-13T20:20:17.372882914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:17.382899 containerd[2111]: time="2026-04-13T20:20:17.382772097Z" level=info msg="RemovePodSandbox \"7cb88e1ce40cdad72866b0738850b372a681ecd812f8bd20ec07c6910f2d7799\" returns successfully" Apr 13 20:20:17.400353 containerd[2111]: time="2026-04-13T20:20:17.400164822Z" level=info msg="StopPodSandbox for \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\"" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.478 [WARNING][6229] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6a5c10c-7432-4751-8918-4251f504fa44", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321", Pod:"csi-node-driver-bs499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76cdf3a7a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.479 [INFO][6229] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.479 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" iface="eth0" netns="" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.479 [INFO][6229] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.479 [INFO][6229] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.558 [INFO][6236] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.558 [INFO][6236] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.558 [INFO][6236] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.575 [WARNING][6236] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.575 [INFO][6236] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.577 [INFO][6236] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:17.590829 containerd[2111]: 2026-04-13 20:20:17.582 [INFO][6229] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.590829 containerd[2111]: time="2026-04-13T20:20:17.590614899Z" level=info msg="TearDown network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" successfully" Apr 13 20:20:17.590829 containerd[2111]: time="2026-04-13T20:20:17.590647562Z" level=info msg="StopPodSandbox for \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" returns successfully" Apr 13 20:20:17.597062 containerd[2111]: time="2026-04-13T20:20:17.592935812Z" level=info msg="RemovePodSandbox for \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\"" Apr 13 20:20:17.597062 containerd[2111]: time="2026-04-13T20:20:17.592973489Z" level=info msg="Forcibly stopping sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\"" Apr 13 20:20:17.693995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943185045.mount: Deactivated successfully. 
Apr 13 20:20:17.793020 containerd[2111]: time="2026-04-13T20:20:17.792965638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:17.796135 containerd[2111]: time="2026-04-13T20:20:17.796069697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:20:17.803463 containerd[2111]: time="2026-04-13T20:20:17.802505082Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.702 [WARNING][6250] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6a5c10c-7432-4751-8918-4251f504fa44", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-17-28", ContainerID:"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321", Pod:"csi-node-driver-bs499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76cdf3a7a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.703 [INFO][6250] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.703 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" iface="eth0" netns="" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.703 [INFO][6250] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.703 [INFO][6250] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.784 [INFO][6257] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.784 [INFO][6257] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.784 [INFO][6257] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.794 [WARNING][6257] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.795 [INFO][6257] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" HandleID="k8s-pod-network.513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Workload="ip--172--31--17--28-k8s-csi--node--driver--bs499-eth0" Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.798 [INFO][6257] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:17.804300 containerd[2111]: 2026-04-13 20:20:17.801 [INFO][6250] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00" Apr 13 20:20:17.805536 containerd[2111]: time="2026-04-13T20:20:17.804333219Z" level=info msg="TearDown network for sandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" successfully" Apr 13 20:20:17.810722 containerd[2111]: time="2026-04-13T20:20:17.810684185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:17.812408 containerd[2111]: time="2026-04-13T20:20:17.812354597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.249393556s" Apr 13 20:20:17.812758 containerd[2111]: time="2026-04-13T20:20:17.812525610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:20:17.832621 containerd[2111]: time="2026-04-13T20:20:17.832572836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:17.833011 containerd[2111]: time="2026-04-13T20:20:17.832881441Z" level=info msg="RemovePodSandbox \"513ae45c013c609e928e82ada482c630c36be2d24bd5342d5932fae4966aae00\" returns successfully" Apr 13 20:20:17.833616 containerd[2111]: time="2026-04-13T20:20:17.833580580Z" level=info msg="StopPodSandbox for \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\"" Apr 13 20:20:17.875597 containerd[2111]: time="2026-04-13T20:20:17.875473249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.896 [WARNING][6279] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0", GenerateName:"calico-kube-controllers-6b846df46d-", Namespace:"calico-system", SelfLink:"", UID:"2fb4a49c-3535-4abb-96bd-a03060aeb7aa", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b846df46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d", 
Pod:"calico-kube-controllers-6b846df46d-nfrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a61cfc1f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.896 [INFO][6279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.896 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" iface="eth0" netns="" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.897 [INFO][6279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.897 [INFO][6279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.961 [INFO][6287] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.961 [INFO][6287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.963 [INFO][6287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.974 [WARNING][6287] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.974 [INFO][6287] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.975 [INFO][6287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:17.981537 containerd[2111]: 2026-04-13 20:20:17.978 [INFO][6279] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:17.982731 containerd[2111]: time="2026-04-13T20:20:17.981579320Z" level=info msg="TearDown network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" successfully" Apr 13 20:20:17.982731 containerd[2111]: time="2026-04-13T20:20:17.981608170Z" level=info msg="StopPodSandbox for \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" returns successfully" Apr 13 20:20:17.983106 containerd[2111]: time="2026-04-13T20:20:17.983055179Z" level=info msg="RemovePodSandbox for \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\"" Apr 13 20:20:17.983278 containerd[2111]: time="2026-04-13T20:20:17.983114203Z" level=info msg="Forcibly stopping sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\"" Apr 13 20:20:18.005885 containerd[2111]: time="2026-04-13T20:20:18.005778813Z" level=info msg="CreateContainer within sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:20:18.046625 containerd[2111]: time="2026-04-13T20:20:18.046472429Z" level=info msg="CreateContainer within sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\"" Apr 13 20:20:18.074951 containerd[2111]: time="2026-04-13T20:20:18.074562538Z" level=info msg="StartContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\"" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.067 [WARNING][6302] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0", GenerateName:"calico-kube-controllers-6b846df46d-", Namespace:"calico-system", SelfLink:"", UID:"2fb4a49c-3535-4abb-96bd-a03060aeb7aa", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b846df46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"f81503af21f0ee9d2dbc7569aea144e11f8514733f93fe3b0eda162d6482a26d", Pod:"calico-kube-controllers-6b846df46d-nfrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a61cfc1f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.067 [INFO][6302] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.067 [INFO][6302] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" iface="eth0" netns="" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.067 [INFO][6302] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.068 [INFO][6302] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.117 [INFO][6309] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.118 [INFO][6309] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.118 [INFO][6309] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.133 [WARNING][6309] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.133 [INFO][6309] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" HandleID="k8s-pod-network.1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--6b846df46d--nfrl6-eth0" Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.135 [INFO][6309] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.142229 containerd[2111]: 2026-04-13 20:20:18.138 [INFO][6302] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d" Apr 13 20:20:18.142229 containerd[2111]: time="2026-04-13T20:20:18.141894662Z" level=info msg="TearDown network for sandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" successfully" Apr 13 20:20:18.156483 containerd[2111]: time="2026-04-13T20:20:18.156433208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:18.156634 containerd[2111]: time="2026-04-13T20:20:18.156528380Z" level=info msg="RemovePodSandbox \"1bafd82070e8022e2d8b94a57cf70508ae2331cbca124551ee37e77a39e63b4d\" returns successfully" Apr 13 20:20:18.158327 containerd[2111]: time="2026-04-13T20:20:18.158288820Z" level=info msg="StopPodSandbox for \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\"" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.229 [WARNING][6325] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0b011b-1723-449b-9203-e921c49b3890", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2", Pod:"coredns-674b8bbfcf-497fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5efbe8fe73", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.229 [INFO][6325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.229 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" iface="eth0" netns="" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.229 [INFO][6325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.229 [INFO][6325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.285 [INFO][6336] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.286 [INFO][6336] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.286 [INFO][6336] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.301 [WARNING][6336] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.301 [INFO][6336] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.304 [INFO][6336] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.311636 containerd[2111]: 2026-04-13 20:20:18.306 [INFO][6325] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.311636 containerd[2111]: time="2026-04-13T20:20:18.311338057Z" level=info msg="TearDown network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" successfully" Apr 13 20:20:18.311636 containerd[2111]: time="2026-04-13T20:20:18.311367182Z" level=info msg="StopPodSandbox for \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" returns successfully" Apr 13 20:20:18.313412 containerd[2111]: time="2026-04-13T20:20:18.311882182Z" level=info msg="RemovePodSandbox for \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\"" Apr 13 20:20:18.313412 containerd[2111]: time="2026-04-13T20:20:18.311914781Z" level=info msg="Forcibly stopping sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\"" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.434 [WARNING][6350] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0b011b-1723-449b-9203-e921c49b3890", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"3689f8feacba5b36d8f20cc01708c06039d03f02b2e08c47fdcc1810cc7171f2", Pod:"coredns-674b8bbfcf-497fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5efbe8fe73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.435 
[INFO][6350] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.435 [INFO][6350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" iface="eth0" netns="" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.436 [INFO][6350] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.436 [INFO][6350] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.498 [INFO][6362] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.498 [INFO][6362] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.498 [INFO][6362] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.532 [WARNING][6362] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.532 [INFO][6362] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" HandleID="k8s-pod-network.d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--497fh-eth0" Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.536 [INFO][6362] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.572600 containerd[2111]: 2026-04-13 20:20:18.565 [INFO][6350] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02" Apr 13 20:20:18.572600 containerd[2111]: time="2026-04-13T20:20:18.571357747Z" level=info msg="TearDown network for sandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" successfully" Apr 13 20:20:18.631648 containerd[2111]: time="2026-04-13T20:20:18.631450385Z" level=info msg="StartContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" returns successfully" Apr 13 20:20:18.670956 containerd[2111]: time="2026-04-13T20:20:18.670905082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:18.671403 containerd[2111]: time="2026-04-13T20:20:18.671227298Z" level=info msg="RemovePodSandbox \"d92c82feb30b082695217625304fb323d38807629607112a1f324adbb98b3d02\" returns successfully" Apr 13 20:20:18.672181 containerd[2111]: time="2026-04-13T20:20:18.671999483Z" level=info msg="StopPodSandbox for \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\"" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.747 [WARNING][6402] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071cac38-3892-4c51-a239-a556805da745", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a", Pod:"coredns-674b8bbfcf-xqfg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafc92d849ad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.748 [INFO][6402] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.748 [INFO][6402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" iface="eth0" netns="" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.748 [INFO][6402] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.748 [INFO][6402] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.779 [INFO][6409] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.779 [INFO][6409] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.779 [INFO][6409] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.786 [WARNING][6409] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.786 [INFO][6409] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.788 [INFO][6409] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.792948 containerd[2111]: 2026-04-13 20:20:18.790 [INFO][6402] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.792948 containerd[2111]: time="2026-04-13T20:20:18.792823910Z" level=info msg="TearDown network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" successfully" Apr 13 20:20:18.792948 containerd[2111]: time="2026-04-13T20:20:18.792847267Z" level=info msg="StopPodSandbox for \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" returns successfully" Apr 13 20:20:18.795466 containerd[2111]: time="2026-04-13T20:20:18.793733858Z" level=info msg="RemovePodSandbox for \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\"" Apr 13 20:20:18.795466 containerd[2111]: time="2026-04-13T20:20:18.793761700Z" level=info msg="Forcibly stopping sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\"" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.838 [WARNING][6424] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071cac38-3892-4c51-a239-a556805da745", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e683a96a0e2fcd41473d042edb4881f14fe5ca88d3cb95c54670f9ca74729a1a", Pod:"coredns-674b8bbfcf-xqfg2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafc92d849ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.838 
[INFO][6424] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.838 [INFO][6424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" iface="eth0" netns="" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.838 [INFO][6424] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.838 [INFO][6424] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.869 [INFO][6432] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.869 [INFO][6432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.869 [INFO][6432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.876 [WARNING][6432] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.876 [INFO][6432] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" HandleID="k8s-pod-network.acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Workload="ip--172--31--17--28-k8s-coredns--674b8bbfcf--xqfg2-eth0" Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.878 [INFO][6432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.884377 containerd[2111]: 2026-04-13 20:20:18.880 [INFO][6424] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9" Apr 13 20:20:18.884377 containerd[2111]: time="2026-04-13T20:20:18.882921057Z" level=info msg="TearDown network for sandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" successfully" Apr 13 20:20:18.891420 containerd[2111]: time="2026-04-13T20:20:18.891227294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:18.891420 containerd[2111]: time="2026-04-13T20:20:18.891316547Z" level=info msg="RemovePodSandbox \"acb984f5a459242394d89f9c2d928aaa1b5cfb6e25bcac4b93d8655d566cc7b9\" returns successfully" Apr 13 20:20:18.892263 containerd[2111]: time="2026-04-13T20:20:18.891897115Z" level=info msg="StopPodSandbox for \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\"" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.937 [WARNING][6446] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"82c2fe9b-a341-45a1-998b-bc57b78f0096", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb", Pod:"calico-apiserver-5944887d59-qk82w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e7eb10fd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.938 [INFO][6446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.938 [INFO][6446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" iface="eth0" netns="" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.938 [INFO][6446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.938 [INFO][6446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.970 [INFO][6453] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.970 [INFO][6453] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.970 [INFO][6453] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.976 [WARNING][6453] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.976 [INFO][6453] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.979 [INFO][6453] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:18.983868 containerd[2111]: 2026-04-13 20:20:18.982 [INFO][6446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:18.984998 containerd[2111]: time="2026-04-13T20:20:18.983904817Z" level=info msg="TearDown network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" successfully" Apr 13 20:20:18.984998 containerd[2111]: time="2026-04-13T20:20:18.983933118Z" level=info msg="StopPodSandbox for \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" returns successfully" Apr 13 20:20:18.984998 containerd[2111]: time="2026-04-13T20:20:18.984653309Z" level=info msg="RemovePodSandbox for \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\"" Apr 13 20:20:18.984998 containerd[2111]: time="2026-04-13T20:20:18.984692068Z" level=info msg="Forcibly stopping sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\"" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.050 [WARNING][6467] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"82c2fe9b-a341-45a1-998b-bc57b78f0096", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"037e579ea7db9642be18eb4555c51693463c2a469d4eeb874263ad9edddccccb", Pod:"calico-apiserver-5944887d59-qk82w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e7eb10fd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.050 [INFO][6467] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.050 [INFO][6467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" iface="eth0" netns="" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.050 [INFO][6467] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.050 [INFO][6467] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.094 [INFO][6474] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.094 [INFO][6474] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.094 [INFO][6474] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.104 [WARNING][6474] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.104 [INFO][6474] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" HandleID="k8s-pod-network.f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--qk82w-eth0" Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.107 [INFO][6474] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:19.114358 containerd[2111]: 2026-04-13 20:20:19.110 [INFO][6467] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5" Apr 13 20:20:19.118029 containerd[2111]: time="2026-04-13T20:20:19.114381137Z" level=info msg="TearDown network for sandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" successfully" Apr 13 20:20:19.151016 containerd[2111]: time="2026-04-13T20:20:19.150971168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:19.151194 containerd[2111]: time="2026-04-13T20:20:19.151107889Z" level=info msg="RemovePodSandbox \"f32965ce0bfc306f7fcbad9c8c6f191f389fa9b2b02a40ef7e3b6103fba8e5b5\" returns successfully" Apr 13 20:20:19.158252 containerd[2111]: time="2026-04-13T20:20:19.157558137Z" level=info msg="StopPodSandbox for \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\"" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.234 [WARNING][6488] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2", Pod:"calico-apiserver-5944887d59-hnxzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf4b1ea02b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.234 [INFO][6488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.234 [INFO][6488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" iface="eth0" netns="" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.234 [INFO][6488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.234 [INFO][6488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.293 [INFO][6496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.293 [INFO][6496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.293 [INFO][6496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.309 [WARNING][6496] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.309 [INFO][6496] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.312 [INFO][6496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:19.318655 containerd[2111]: 2026-04-13 20:20:19.315 [INFO][6488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.319588 containerd[2111]: time="2026-04-13T20:20:19.319553279Z" level=info msg="TearDown network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" successfully" Apr 13 20:20:19.319691 containerd[2111]: time="2026-04-13T20:20:19.319674856Z" level=info msg="StopPodSandbox for \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" returns successfully" Apr 13 20:20:19.337344 containerd[2111]: time="2026-04-13T20:20:19.337291879Z" level=info msg="RemovePodSandbox for \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\"" Apr 13 20:20:19.337687 containerd[2111]: time="2026-04-13T20:20:19.337667323Z" level=info msg="Forcibly stopping sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\"" Apr 13 20:20:19.348691 containerd[2111]: time="2026-04-13T20:20:19.348484423Z" level=info msg="StopContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" with timeout 30 (s)" Apr 13 
20:20:19.348856 containerd[2111]: time="2026-04-13T20:20:19.348730071Z" level=info msg="StopContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" with timeout 30 (s)" Apr 13 20:20:19.358096 containerd[2111]: time="2026-04-13T20:20:19.353956394Z" level=info msg="Stop container \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" with signal terminated" Apr 13 20:20:19.366457 containerd[2111]: time="2026-04-13T20:20:19.365784939Z" level=info msg="Stop container \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" with signal terminated" Apr 13 20:20:19.412637 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:19.409384 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:19.409420 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:19.587571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4-rootfs.mount: Deactivated successfully. Apr 13 20:20:19.609541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98-rootfs.mount: Deactivated successfully. 
Apr 13 20:20:19.641817 containerd[2111]: time="2026-04-13T20:20:19.605525049Z" level=info msg="shim disconnected" id=7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98 namespace=k8s.io Apr 13 20:20:19.658874 containerd[2111]: time="2026-04-13T20:20:19.620977026Z" level=info msg="shim disconnected" id=4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4 namespace=k8s.io Apr 13 20:20:19.658874 containerd[2111]: time="2026-04-13T20:20:19.658604566Z" level=warning msg="cleaning up after shim disconnected" id=4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4 namespace=k8s.io Apr 13 20:20:19.658874 containerd[2111]: time="2026-04-13T20:20:19.658626128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:20:19.675469 containerd[2111]: time="2026-04-13T20:20:19.674798665Z" level=warning msg="cleaning up after shim disconnected" id=7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98 namespace=k8s.io Apr 13 20:20:19.675469 containerd[2111]: time="2026-04-13T20:20:19.674835359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.615 [WARNING][6517] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0", GenerateName:"calico-apiserver-5944887d59-", Namespace:"calico-system", SelfLink:"", UID:"fc62823c-cfb3-45a8-ba7b-6b153dcc5bd6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5944887d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"d869f6316eac50e3395141e0ab9b4254f5ecec6681c257cacf1071467f10bbe2", Pod:"calico-apiserver-5944887d59-hnxzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf4b1ea02b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.615 [INFO][6517] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.615 [INFO][6517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" iface="eth0" netns="" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.615 [INFO][6517] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.615 [INFO][6517] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.718 [INFO][6557] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.718 [INFO][6557] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.718 [INFO][6557] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.733 [WARNING][6557] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.733 [INFO][6557] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" HandleID="k8s-pod-network.e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Workload="ip--172--31--17--28-k8s-calico--apiserver--5944887d59--hnxzb-eth0" Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.736 [INFO][6557] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:19.761039 containerd[2111]: 2026-04-13 20:20:19.752 [INFO][6517] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802" Apr 13 20:20:19.761039 containerd[2111]: time="2026-04-13T20:20:19.760611756Z" level=info msg="TearDown network for sandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" successfully" Apr 13 20:20:19.784360 containerd[2111]: time="2026-04-13T20:20:19.784311958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:19.788820 containerd[2111]: time="2026-04-13T20:20:19.788779709Z" level=info msg="RemovePodSandbox \"e98b096226240930cb694c960933e7c302314ead563959ba0eb5b699f88bb802\" returns successfully" Apr 13 20:20:19.789790 containerd[2111]: time="2026-04-13T20:20:19.789755304Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" Apr 13 20:20:19.790898 containerd[2111]: time="2026-04-13T20:20:19.790861876Z" level=info msg="StopContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" returns successfully" Apr 13 20:20:19.804809 containerd[2111]: time="2026-04-13T20:20:19.804644901Z" level=info msg="StopContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" returns successfully" Apr 13 20:20:19.805715 containerd[2111]: time="2026-04-13T20:20:19.805656679Z" level=info msg="StopPodSandbox for \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\"" Apr 13 20:20:19.810243 containerd[2111]: time="2026-04-13T20:20:19.810108134Z" level=info msg="Container to stop \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:20:19.810243 containerd[2111]: time="2026-04-13T20:20:19.810162121Z" level=info msg="Container to stop \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:20:19.820427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4-shm.mount: Deactivated successfully. Apr 13 20:20:19.928278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4-rootfs.mount: Deactivated successfully. 
Apr 13 20:20:19.930059 containerd[2111]: time="2026-04-13T20:20:19.928275571Z" level=info msg="shim disconnected" id=e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4 namespace=k8s.io Apr 13 20:20:19.930059 containerd[2111]: time="2026-04-13T20:20:19.928338643Z" level=warning msg="cleaning up after shim disconnected" id=e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4 namespace=k8s.io Apr 13 20:20:19.930059 containerd[2111]: time="2026-04-13T20:20:19.928350676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:19.948 [WARNING][6600] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0", GenerateName:"whisker-7f9f779ffd-", Namespace:"calico-system", SelfLink:"", UID:"36b65341-e0c3-462c-84cd-6efbb156c217", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9f779ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4", Pod:"whisker-7f9f779ffd-5rvbw", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f76e4dd09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:19.950 [INFO][6600] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:19.950 [INFO][6600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" iface="eth0" netns="" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:19.950 [INFO][6600] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:19.950 [INFO][6600] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.045 [INFO][6627] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.045 [INFO][6627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.045 [INFO][6627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.060 [WARNING][6627] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.061 [INFO][6627] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.065 [INFO][6627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:20.092253 containerd[2111]: 2026-04-13 20:20:20.081 [INFO][6600] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.092253 containerd[2111]: time="2026-04-13T20:20:20.091884391Z" level=info msg="TearDown network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" successfully" Apr 13 20:20:20.095083 containerd[2111]: time="2026-04-13T20:20:20.091919169Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" returns successfully" Apr 13 20:20:20.095250 containerd[2111]: time="2026-04-13T20:20:20.095079784Z" level=info msg="RemovePodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" Apr 13 20:20:20.095250 containerd[2111]: time="2026-04-13T20:20:20.095115377Z" level=info msg="Forcibly stopping sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" Apr 13 20:20:20.254583 kubelet[3399]: I0413 20:20:20.253202 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f9f779ffd-5rvbw" podStartSLOduration=22.546630957 podStartE2EDuration="41.248626122s" podCreationTimestamp="2026-04-13 20:19:39 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.151859781 +0000 UTC m=+44.923602923" lastFinishedPulling="2026-04-13 20:20:17.853854926 +0000 UTC m=+63.625598088" observedRunningTime="2026-04-13 20:20:19.427992158 +0000 UTC m=+65.199735319" watchObservedRunningTime="2026-04-13 20:20:20.248626122 +0000 UTC m=+66.020369284" Apr 13 20:20:20.257581 systemd-networkd[1657]: cali76f76e4dd09: Link DOWN Apr 13 20:20:20.257587 systemd-networkd[1657]: cali76f76e4dd09: Lost carrier Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.237 [WARNING][6674] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0", GenerateName:"whisker-7f9f779ffd-", Namespace:"calico-system", SelfLink:"", UID:"36b65341-e0c3-462c-84cd-6efbb156c217", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f9f779ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4", Pod:"whisker-7f9f779ffd-5rvbw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76f76e4dd09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.237 [INFO][6674] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.237 [INFO][6674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" iface="eth0" netns="" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.238 [INFO][6674] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.238 [INFO][6674] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.371 [INFO][6695] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.371 [INFO][6695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.371 [INFO][6695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.393 [WARNING][6695] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.393 [INFO][6695] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" HandleID="k8s-pod-network.a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.398 [INFO][6695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:20.423735 containerd[2111]: 2026-04-13 20:20:20.407 [INFO][6674] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597" Apr 13 20:20:20.425445 containerd[2111]: time="2026-04-13T20:20:20.423785750Z" level=info msg="TearDown network for sandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" successfully" Apr 13 20:20:20.449213 containerd[2111]: time="2026-04-13T20:20:20.448317279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:20:20.451214 containerd[2111]: time="2026-04-13T20:20:20.449923351Z" level=info msg="RemovePodSandbox \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" returns successfully" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.247 [INFO][6683] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.248 [INFO][6683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" iface="eth0" netns="/var/run/netns/cni-f984a05d-78d6-4e8c-f199-b65b6335d2ec" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.249 [INFO][6683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" iface="eth0" netns="/var/run/netns/cni-f984a05d-78d6-4e8c-f199-b65b6335d2ec" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.273 [INFO][6683] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" after=25.335608ms iface="eth0" netns="/var/run/netns/cni-f984a05d-78d6-4e8c-f199-b65b6335d2ec" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.273 [INFO][6683] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.274 [INFO][6683] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.447 [INFO][6701] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.447 [INFO][6701] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.448 [INFO][6701] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.593 [INFO][6701] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.595 [INFO][6701] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0" Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.602 [INFO][6701] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:20.636979 containerd[2111]: 2026-04-13 20:20:20.611 [INFO][6683] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Apr 13 20:20:20.636979 containerd[2111]: time="2026-04-13T20:20:20.635498972Z" level=info msg="TearDown network for sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" successfully" Apr 13 20:20:20.636979 containerd[2111]: time="2026-04-13T20:20:20.635532436Z" level=info msg="StopPodSandbox for \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" returns successfully" Apr 13 20:20:20.643966 systemd[1]: run-netns-cni\x2df984a05d\x2d78d6\x2d4e8c\x2df199\x2db65b6335d2ec.mount: Deactivated successfully. 
Apr 13 20:20:20.655791 containerd[2111]: time="2026-04-13T20:20:20.655742020Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\"" Apr 13 20:20:20.655791 containerd[2111]: time="2026-04-13T20:20:20.655790903Z" level=info msg="StopPodSandbox for \"a86683a750b37c2d4b140e57bfbbd3f35b4bc3ff186c2ef59cdac0fdc2cd9597\" returns successfully" Apr 13 20:20:20.696225 kubelet[3399]: I0413 20:20:20.695322 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Apr 13 20:20:20.837333 containerd[2111]: time="2026-04-13T20:20:20.837277249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.839652 containerd[2111]: time="2026-04-13T20:20:20.839598663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:20:20.841616 containerd[2111]: time="2026-04-13T20:20:20.841496321Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.845952 containerd[2111]: time="2026-04-13T20:20:20.845894811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.847469 containerd[2111]: time="2026-04-13T20:20:20.846726020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.971207003s" Apr 13 20:20:20.847469 containerd[2111]: time="2026-04-13T20:20:20.846771378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:20:20.852519 kubelet[3399]: I0413 20:20:20.852473 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-nginx-config\") pod \"36b65341-e0c3-462c-84cd-6efbb156c217\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " Apr 13 20:20:20.852677 kubelet[3399]: I0413 20:20:20.852560 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-backend-key-pair\") pod \"36b65341-e0c3-462c-84cd-6efbb156c217\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " Apr 13 20:20:20.852677 kubelet[3399]: I0413 20:20:20.852614 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-ca-bundle\") pod \"36b65341-e0c3-462c-84cd-6efbb156c217\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " Apr 13 20:20:20.852677 kubelet[3399]: I0413 20:20:20.852643 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjmk\" (UniqueName: \"kubernetes.io/projected/36b65341-e0c3-462c-84cd-6efbb156c217-kube-api-access-xsjmk\") pod \"36b65341-e0c3-462c-84cd-6efbb156c217\" (UID: \"36b65341-e0c3-462c-84cd-6efbb156c217\") " Apr 13 20:20:20.889001 kubelet[3399]: I0413 20:20:20.885759 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "36b65341-e0c3-462c-84cd-6efbb156c217" (UID: "36b65341-e0c3-462c-84cd-6efbb156c217"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:20:20.889001 kubelet[3399]: I0413 20:20:20.884202 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "36b65341-e0c3-462c-84cd-6efbb156c217" (UID: "36b65341-e0c3-462c-84cd-6efbb156c217"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:20:20.917937 kubelet[3399]: I0413 20:20:20.917384 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "36b65341-e0c3-462c-84cd-6efbb156c217" (UID: "36b65341-e0c3-462c-84cd-6efbb156c217"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:20:20.919256 containerd[2111]: time="2026-04-13T20:20:20.919075033Z" level=info msg="CreateContainer within sandbox \"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:20:20.921832 systemd[1]: var-lib-kubelet-pods-36b65341\x2de0c3\x2d462c\x2d84cd\x2d6efbb156c217-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsjmk.mount: Deactivated successfully. Apr 13 20:20:20.922222 systemd[1]: var-lib-kubelet-pods-36b65341\x2de0c3\x2d462c\x2d84cd\x2d6efbb156c217-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 20:20:20.927366 kubelet[3399]: I0413 20:20:20.927253 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b65341-e0c3-462c-84cd-6efbb156c217-kube-api-access-xsjmk" (OuterVolumeSpecName: "kube-api-access-xsjmk") pod "36b65341-e0c3-462c-84cd-6efbb156c217" (UID: "36b65341-e0c3-462c-84cd-6efbb156c217"). InnerVolumeSpecName "kube-api-access-xsjmk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:20:20.974324 kubelet[3399]: I0413 20:20:20.974276 3399 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-nginx-config\") on node \"ip-172-31-17-28\" DevicePath \"\"" Apr 13 20:20:20.974479 kubelet[3399]: I0413 20:20:20.974346 3399 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-backend-key-pair\") on node \"ip-172-31-17-28\" DevicePath \"\"" Apr 13 20:20:20.974479 kubelet[3399]: I0413 20:20:20.974363 3399 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b65341-e0c3-462c-84cd-6efbb156c217-whisker-ca-bundle\") on node \"ip-172-31-17-28\" DevicePath \"\"" Apr 13 20:20:20.974479 kubelet[3399]: I0413 20:20:20.974377 3399 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xsjmk\" (UniqueName: \"kubernetes.io/projected/36b65341-e0c3-462c-84cd-6efbb156c217-kube-api-access-xsjmk\") on node \"ip-172-31-17-28\" DevicePath \"\"" Apr 13 20:20:20.975126 containerd[2111]: time="2026-04-13T20:20:20.975079477Z" level=info msg="CreateContainer within sandbox \"d3c6a12d2f951dc5b9ef7706b11d0d8bb239cd499aaa36d45f37d7f45460c321\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ee46f9894dd71e603ed4d81a47ac709eb5a4df15b608ed2aaea2622a1789ed8\"" Apr 13 20:20:20.977044 containerd[2111]: 
time="2026-04-13T20:20:20.975911926Z" level=info msg="StartContainer for \"5ee46f9894dd71e603ed4d81a47ac709eb5a4df15b608ed2aaea2622a1789ed8\"" Apr 13 20:20:21.103022 containerd[2111]: time="2026-04-13T20:20:21.102335194Z" level=info msg="StartContainer for \"5ee46f9894dd71e603ed4d81a47ac709eb5a4df15b608ed2aaea2622a1789ed8\" returns successfully" Apr 13 20:20:21.457380 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:21.457391 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:21.460193 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:21.779835 kubelet[3399]: I0413 20:20:21.779097 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bs499" podStartSLOduration=24.614024007 podStartE2EDuration="45.779073792s" podCreationTimestamp="2026-04-13 20:19:36 +0000 UTC" firstStartedPulling="2026-04-13 20:19:59.683277419 +0000 UTC m=+45.455020577" lastFinishedPulling="2026-04-13 20:20:20.848327224 +0000 UTC m=+66.620070362" observedRunningTime="2026-04-13 20:20:21.754845628 +0000 UTC m=+67.526588789" watchObservedRunningTime="2026-04-13 20:20:21.779073792 +0000 UTC m=+67.550816953" Apr 13 20:20:22.061287 kubelet[3399]: I0413 20:20:22.060910 3399 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:20:22.067315 kubelet[3399]: I0413 20:20:22.066076 3399 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:20:22.104085 kubelet[3399]: I0413 20:20:22.103569 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947h5\" (UniqueName: \"kubernetes.io/projected/198c56ec-b3a7-4232-94e7-f7cee1428d59-kube-api-access-947h5\") pod \"whisker-d5c7447f5-8fmtv\" (UID: 
\"198c56ec-b3a7-4232-94e7-f7cee1428d59\") " pod="calico-system/whisker-d5c7447f5-8fmtv" Apr 13 20:20:22.104085 kubelet[3399]: I0413 20:20:22.103678 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/198c56ec-b3a7-4232-94e7-f7cee1428d59-nginx-config\") pod \"whisker-d5c7447f5-8fmtv\" (UID: \"198c56ec-b3a7-4232-94e7-f7cee1428d59\") " pod="calico-system/whisker-d5c7447f5-8fmtv" Apr 13 20:20:22.104085 kubelet[3399]: I0413 20:20:22.103707 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/198c56ec-b3a7-4232-94e7-f7cee1428d59-whisker-backend-key-pair\") pod \"whisker-d5c7447f5-8fmtv\" (UID: \"198c56ec-b3a7-4232-94e7-f7cee1428d59\") " pod="calico-system/whisker-d5c7447f5-8fmtv" Apr 13 20:20:22.104085 kubelet[3399]: I0413 20:20:22.103731 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198c56ec-b3a7-4232-94e7-f7cee1428d59-whisker-ca-bundle\") pod \"whisker-d5c7447f5-8fmtv\" (UID: \"198c56ec-b3a7-4232-94e7-f7cee1428d59\") " pod="calico-system/whisker-d5c7447f5-8fmtv" Apr 13 20:20:22.363164 containerd[2111]: time="2026-04-13T20:20:22.363002799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d5c7447f5-8fmtv,Uid:198c56ec-b3a7-4232-94e7-f7cee1428d59,Namespace:calico-system,Attempt:0,}" Apr 13 20:20:22.470365 kubelet[3399]: I0413 20:20:22.469631 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b65341-e0c3-462c-84cd-6efbb156c217" path="/var/lib/kubelet/pods/36b65341-e0c3-462c-84cd-6efbb156c217/volumes" Apr 13 20:20:22.598012 ntpd[2061]: Deleting interface #7 cali76f76e4dd09, fe80::ecee:eeff:feee:eeee%4#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs Apr 13 20:20:22.604614 
ntpd[2061]: 13 Apr 20:20:22 ntpd[2061]: Deleting interface #7 cali76f76e4dd09, fe80::ecee:eeff:feee:eeee%4#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs Apr 13 20:20:22.712312 (udev-worker)[6702]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:20:22.714006 systemd-networkd[1657]: calic367a0ef038: Link UP Apr 13 20:20:22.714439 systemd-networkd[1657]: calic367a0ef038: Gained carrier Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.593 [INFO][6784] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0 whisker-d5c7447f5- calico-system 198c56ec-b3a7-4232-94e7-f7cee1428d59 1084 0 2026-04-13 20:20:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d5c7447f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-28 whisker-d5c7447f5-8fmtv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic367a0ef038 [] [] }} ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.594 [INFO][6784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.636 [INFO][6797] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" HandleID="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" 
Workload="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.647 [INFO][6797] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" HandleID="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Workload="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efa90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"whisker-d5c7447f5-8fmtv", "timestamp":"2026-04-13 20:20:22.636731102 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000321080)} Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.647 [INFO][6797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.647 [INFO][6797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.647 [INFO][6797] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.651 [INFO][6797] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.664 [INFO][6797] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.672 [INFO][6797] ipam/ipam.go 526: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.674 [INFO][6797] ipam/ipam.go 160: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.677 [INFO][6797] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.677 [INFO][6797] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.679 [INFO][6797] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.687 [INFO][6797] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.700 [INFO][6797] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.120.73/26] block=192.168.120.64/26 
handle="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.700 [INFO][6797] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.120.73/26] handle="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" host="ip-172-31-17-28" Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.700 [INFO][6797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:20:22.755068 containerd[2111]: 2026-04-13 20:20:22.700 [INFO][6797] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.120.73/26] IPv6=[] ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" HandleID="k8s-pod-network.0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Workload="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.703 [INFO][6784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0", GenerateName:"whisker-d5c7447f5-", Namespace:"calico-system", SelfLink:"", UID:"198c56ec-b3a7-4232-94e7-f7cee1428d59", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 20, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d5c7447f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"whisker-d5c7447f5-8fmtv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic367a0ef038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.703 [INFO][6784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.73/32] ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.703 [INFO][6784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic367a0ef038 ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.714 [INFO][6784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.716 [INFO][6784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" 
Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0", GenerateName:"whisker-d5c7447f5-", Namespace:"calico-system", SelfLink:"", UID:"198c56ec-b3a7-4232-94e7-f7cee1428d59", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 20, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d5c7447f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e", Pod:"whisker-d5c7447f5-8fmtv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic367a0ef038", MAC:"f6:a5:8a:5d:72:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:20:22.757340 containerd[2111]: 2026-04-13 20:20:22.739 [INFO][6784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e" Namespace="calico-system" Pod="whisker-d5c7447f5-8fmtv" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--d5c7447f5--8fmtv-eth0" Apr 13 20:20:22.821214 containerd[2111]: 
time="2026-04-13T20:20:22.820441350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:20:22.821214 containerd[2111]: time="2026-04-13T20:20:22.820523571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:20:22.821214 containerd[2111]: time="2026-04-13T20:20:22.820546566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:22.821214 containerd[2111]: time="2026-04-13T20:20:22.820675343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:22.926264 systemd[1]: run-containerd-runc-k8s.io-0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e-runc.xE1bJt.mount: Deactivated successfully. Apr 13 20:20:22.978301 containerd[2111]: time="2026-04-13T20:20:22.977660238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d5c7447f5-8fmtv,Uid:198c56ec-b3a7-4232-94e7-f7cee1428d59,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e\"" Apr 13 20:20:23.011583 containerd[2111]: time="2026-04-13T20:20:23.011431165Z" level=info msg="CreateContainer within sandbox \"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:20:23.046022 containerd[2111]: time="2026-04-13T20:20:23.045967601Z" level=info msg="CreateContainer within sandbox \"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e4e5ca9a3f324a630931ad23e1952908a592f9d791b3c3c95ab1a136ec3e2917\"" Apr 13 20:20:23.047709 containerd[2111]: time="2026-04-13T20:20:23.047635589Z" level=info msg="StartContainer for 
\"e4e5ca9a3f324a630931ad23e1952908a592f9d791b3c3c95ab1a136ec3e2917\"" Apr 13 20:20:23.144534 containerd[2111]: time="2026-04-13T20:20:23.144479085Z" level=info msg="StartContainer for \"e4e5ca9a3f324a630931ad23e1952908a592f9d791b3c3c95ab1a136ec3e2917\" returns successfully" Apr 13 20:20:23.152339 containerd[2111]: time="2026-04-13T20:20:23.152180027Z" level=info msg="CreateContainer within sandbox \"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:20:23.174766 containerd[2111]: time="2026-04-13T20:20:23.174711769Z" level=info msg="CreateContainer within sandbox \"0ea2f4de2f0d5a40756507a1bc9347da7f6e923fc112881c360e77736d43279e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9e0386a0a8f0c1ec25804c7d304d79ccfc516d2838aebaf954c0dffaf890896e\"" Apr 13 20:20:23.176313 containerd[2111]: time="2026-04-13T20:20:23.175388026Z" level=info msg="StartContainer for \"9e0386a0a8f0c1ec25804c7d304d79ccfc516d2838aebaf954c0dffaf890896e\"" Apr 13 20:20:23.280404 containerd[2111]: time="2026-04-13T20:20:23.280177403Z" level=info msg="StartContainer for \"9e0386a0a8f0c1ec25804c7d304d79ccfc516d2838aebaf954c0dffaf890896e\" returns successfully" Apr 13 20:20:23.505478 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:23.507334 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:23.505487 systemd-resolved[1989]: Flushed all caches. 
Apr 13 20:20:23.753166 kubelet[3399]: I0413 20:20:23.752390 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d5c7447f5-8fmtv" podStartSLOduration=2.75236467 podStartE2EDuration="2.75236467s" podCreationTimestamp="2026-04-13 20:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:23.750113822 +0000 UTC m=+69.521856982" watchObservedRunningTime="2026-04-13 20:20:23.75236467 +0000 UTC m=+69.524107834" Apr 13 20:20:23.895754 systemd-networkd[1657]: calic367a0ef038: Gained IPv6LL Apr 13 20:20:26.598098 ntpd[2061]: Listen normally on 16 calic367a0ef038 [fe80::ecee:eeff:feee:eeee%15]:123 Apr 13 20:20:26.598611 ntpd[2061]: 13 Apr 20:20:26 ntpd[2061]: Listen normally on 16 calic367a0ef038 [fe80::ecee:eeff:feee:eeee%15]:123 Apr 13 20:20:28.069317 systemd[1]: run-containerd-runc-k8s.io-cc4038175f9c0842f7b43eff43861bdeef9cc3275721244bb71d5c9dafdebd12-runc.mBEaVG.mount: Deactivated successfully. Apr 13 20:20:28.279916 systemd[1]: Started sshd@7-172.31.17.28:22-50.85.169.122:51054.service - OpenSSH per-connection server daemon (50.85.169.122:51054). Apr 13 20:20:29.201798 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:29.203424 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:29.201832 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:29.406043 sshd[6975]: Accepted publickey for core from 50.85.169.122 port 51054 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:29.411159 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:29.439083 systemd-logind[2083]: New session 8 of user core. Apr 13 20:20:29.448682 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 13 20:20:30.944183 sshd[6975]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:30.954473 systemd[1]: sshd@7-172.31.17.28:22-50.85.169.122:51054.service: Deactivated successfully. Apr 13 20:20:30.959506 systemd-logind[2083]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:20:30.961359 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:20:30.963902 systemd-logind[2083]: Removed session 8. Apr 13 20:20:36.109668 systemd[1]: Started sshd@8-172.31.17.28:22-50.85.169.122:42980.service - OpenSSH per-connection server daemon (50.85.169.122:42980). Apr 13 20:20:37.190180 sshd[7009]: Accepted publickey for core from 50.85.169.122 port 42980 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:37.197689 sshd[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:37.209615 systemd-logind[2083]: New session 9 of user core. Apr 13 20:20:37.214238 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:20:38.371334 sshd[7009]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:38.375662 systemd[1]: sshd@8-172.31.17.28:22-50.85.169.122:42980.service: Deactivated successfully. Apr 13 20:20:38.382596 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:20:38.384281 systemd-logind[2083]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:20:38.387060 systemd-logind[2083]: Removed session 9. Apr 13 20:20:43.559571 systemd[1]: Started sshd@9-172.31.17.28:22-50.85.169.122:55616.service - OpenSSH per-connection server daemon (50.85.169.122:55616). Apr 13 20:20:44.620815 sshd[7065]: Accepted publickey for core from 50.85.169.122 port 55616 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:44.622881 sshd[7065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:44.628917 systemd-logind[2083]: New session 10 of user core. 
Apr 13 20:20:44.633693 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:20:45.475014 sshd[7065]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:45.485713 systemd[1]: sshd@9-172.31.17.28:22-50.85.169.122:55616.service: Deactivated successfully. Apr 13 20:20:45.490471 systemd-logind[2083]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:20:45.491463 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:20:45.493353 systemd-logind[2083]: Removed session 10. Apr 13 20:20:47.684830 kubelet[3399]: I0413 20:20:47.680491 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:20:50.637708 systemd[1]: Started sshd@10-172.31.17.28:22-50.85.169.122:36654.service - OpenSSH per-connection server daemon (50.85.169.122:36654). Apr 13 20:20:51.699073 sshd[7122]: Accepted publickey for core from 50.85.169.122 port 36654 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:51.703090 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:51.710969 systemd-logind[2083]: New session 11 of user core. Apr 13 20:20:51.714531 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:20:53.202598 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:53.209071 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:53.202637 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:53.510890 sshd[7122]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:53.516490 systemd[1]: sshd@10-172.31.17.28:22-50.85.169.122:36654.service: Deactivated successfully. Apr 13 20:20:53.521518 systemd-logind[2083]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:20:53.522715 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:20:53.528735 systemd-logind[2083]: Removed session 11. 
Apr 13 20:20:53.671488 systemd[1]: Started sshd@11-172.31.17.28:22-50.85.169.122:36670.service - OpenSSH per-connection server daemon (50.85.169.122:36670). Apr 13 20:20:54.666457 sshd[7139]: Accepted publickey for core from 50.85.169.122 port 36670 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:54.667174 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:54.672680 systemd-logind[2083]: New session 12 of user core. Apr 13 20:20:54.679543 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:20:55.249577 systemd-resolved[1989]: Under memory pressure, flushing caches. Apr 13 20:20:55.249587 systemd-resolved[1989]: Flushed all caches. Apr 13 20:20:55.251232 systemd-journald[1573]: Under memory pressure, flushing caches. Apr 13 20:20:55.527749 sshd[7139]: pam_unix(sshd:session): session closed for user core Apr 13 20:20:55.538417 systemd[1]: sshd@11-172.31.17.28:22-50.85.169.122:36670.service: Deactivated successfully. Apr 13 20:20:55.545361 systemd-logind[2083]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:20:55.545971 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:20:55.549752 systemd-logind[2083]: Removed session 12. Apr 13 20:20:55.702026 systemd[1]: Started sshd@12-172.31.17.28:22-50.85.169.122:36674.service - OpenSSH per-connection server daemon (50.85.169.122:36674). Apr 13 20:20:56.711633 sshd[7151]: Accepted publickey for core from 50.85.169.122 port 36674 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:20:56.713503 sshd[7151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:20:56.718382 systemd-logind[2083]: New session 13 of user core. Apr 13 20:20:56.722431 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 13 20:20:57.508872 sshd[7151]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:57.512668 systemd[1]: sshd@12-172.31.17.28:22-50.85.169.122:36674.service: Deactivated successfully.
Apr 13 20:20:57.519051 systemd-logind[2083]: Session 13 logged out. Waiting for processes to exit.
Apr 13 20:20:57.519936 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 20:20:57.521778 systemd-logind[2083]: Removed session 13.
Apr 13 20:21:02.683319 systemd[1]: Started sshd@13-172.31.17.28:22-50.85.169.122:33652.service - OpenSSH per-connection server daemon (50.85.169.122:33652).
Apr 13 20:21:03.748173 sshd[7185]: Accepted publickey for core from 50.85.169.122 port 33652 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:03.756889 sshd[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:03.763669 systemd-logind[2083]: New session 14 of user core.
Apr 13 20:21:03.769897 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 20:21:05.038666 sshd[7185]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:05.042675 systemd[1]: sshd@13-172.31.17.28:22-50.85.169.122:33652.service: Deactivated successfully.
Apr 13 20:21:05.049320 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 20:21:05.050482 systemd-logind[2083]: Session 14 logged out. Waiting for processes to exit.
Apr 13 20:21:05.051674 systemd-logind[2083]: Removed session 14.
Apr 13 20:21:05.171113 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:05.171577 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:05.171168 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:05.196943 systemd[1]: Started sshd@14-172.31.17.28:22-50.85.169.122:33658.service - OpenSSH per-connection server daemon (50.85.169.122:33658).
Apr 13 20:21:06.160260 sshd[7232]: Accepted publickey for core from 50.85.169.122 port 33658 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:06.162159 sshd[7232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:06.167981 systemd-logind[2083]: New session 15 of user core.
Apr 13 20:21:06.172582 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:21:06.487855 systemd[1]: run-containerd-runc-k8s.io-c80c19c7c8670105e960dc6549061a516884fedcdd18c46e45f626b3259541d3-runc.tKi9wE.mount: Deactivated successfully.
Apr 13 20:21:07.382157 sshd[7232]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:07.392135 systemd[1]: sshd@14-172.31.17.28:22-50.85.169.122:33658.service: Deactivated successfully.
Apr 13 20:21:07.396671 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:21:07.398240 systemd-logind[2083]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:21:07.399736 systemd-logind[2083]: Removed session 15.
Apr 13 20:21:07.545418 systemd[1]: Started sshd@15-172.31.17.28:22-50.85.169.122:33662.service - OpenSSH per-connection server daemon (50.85.169.122:33662).
Apr 13 20:21:08.560820 sshd[7264]: Accepted publickey for core from 50.85.169.122 port 33662 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:08.570081 sshd[7264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:08.575542 systemd-logind[2083]: New session 16 of user core.
Apr 13 20:21:08.580586 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:21:10.096110 sshd[7264]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:10.105132 systemd[1]: sshd@15-172.31.17.28:22-50.85.169.122:33662.service: Deactivated successfully.
Apr 13 20:21:10.110618 systemd-logind[2083]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:21:10.110852 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:21:10.113797 systemd-logind[2083]: Removed session 16.
Apr 13 20:21:10.255639 systemd[1]: Started sshd@16-172.31.17.28:22-50.85.169.122:57510.service - OpenSSH per-connection server daemon (50.85.169.122:57510).
Apr 13 20:21:11.205201 sshd[7291]: Accepted publickey for core from 50.85.169.122 port 57510 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:11.207741 sshd[7291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:11.214067 systemd-logind[2083]: New session 17 of user core.
Apr 13 20:21:11.218515 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:21:12.593334 sshd[7291]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:12.604708 systemd[1]: sshd@16-172.31.17.28:22-50.85.169.122:57510.service: Deactivated successfully.
Apr 13 20:21:12.610770 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:21:12.611181 systemd-logind[2083]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:21:12.615074 systemd-logind[2083]: Removed session 17.
Apr 13 20:21:12.772513 systemd[1]: Started sshd@17-172.31.17.28:22-50.85.169.122:57524.service - OpenSSH per-connection server daemon (50.85.169.122:57524).
Apr 13 20:21:13.233256 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:13.235329 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:13.233411 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:13.816988 sshd[7303]: Accepted publickey for core from 50.85.169.122 port 57524 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:13.820238 sshd[7303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:13.824874 systemd-logind[2083]: New session 18 of user core.
Apr 13 20:21:13.831519 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:21:14.783429 sshd[7303]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:14.791604 systemd[1]: sshd@17-172.31.17.28:22-50.85.169.122:57524.service: Deactivated successfully.
Apr 13 20:21:14.803943 systemd-logind[2083]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:21:14.804472 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:21:14.809044 systemd-logind[2083]: Removed session 18.
Apr 13 20:21:19.945414 systemd[1]: Started sshd@18-172.31.17.28:22-50.85.169.122:54986.service - OpenSSH per-connection server daemon (50.85.169.122:54986).
Apr 13 20:21:20.589565 kubelet[3399]: I0413 20:21:20.583036 3399 scope.go:117] "RemoveContainer" containerID="7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98"
Apr 13 20:21:20.832994 containerd[2111]: time="2026-04-13T20:21:20.810294517Z" level=info msg="RemoveContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\""
Apr 13 20:21:20.954099 sshd[7340]: Accepted publickey for core from 50.85.169.122 port 54986 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:20.962756 sshd[7340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:20.983555 containerd[2111]: time="2026-04-13T20:21:20.982685776Z" level=info msg="RemoveContainer for \"7195c730556ebf1c9e759fc8ec5634355dff12f7f8dc19cf27bb1974cc20ed98\" returns successfully"
Apr 13 20:21:20.986066 systemd-logind[2083]: New session 19 of user core.
Apr 13 20:21:20.989579 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:21:21.031215 kubelet[3399]: I0413 20:21:21.031180 3399 scope.go:117] "RemoveContainer" containerID="4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4"
Apr 13 20:21:21.034506 containerd[2111]: time="2026-04-13T20:21:21.034471943Z" level=info msg="RemoveContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\""
Apr 13 20:21:21.041528 containerd[2111]: time="2026-04-13T20:21:21.041489806Z" level=info msg="RemoveContainer for \"4628df49b2642a667d7f598c95167590072874e79d8c177470f45f3c0829edc4\" returns successfully"
Apr 13 20:21:21.043205 containerd[2111]: time="2026-04-13T20:21:21.043173792Z" level=info msg="StopPodSandbox for \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\""
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.566 [WARNING][7373] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.571 [INFO][7373] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.571 [INFO][7373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" iface="eth0" netns=""
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.571 [INFO][7373] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.571 [INFO][7373] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.976 [INFO][7383] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.985 [INFO][7383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:21.986 [INFO][7383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:22.000 [WARNING][7383] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:22.000 [INFO][7383] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:22.002 [INFO][7383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:21:22.037515 containerd[2111]: 2026-04-13 20:21:22.007 [INFO][7373] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.114271 containerd[2111]: time="2026-04-13T20:21:22.113692529Z" level=info msg="TearDown network for sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" successfully"
Apr 13 20:21:22.114486 containerd[2111]: time="2026-04-13T20:21:22.114457057Z" level=info msg="StopPodSandbox for \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" returns successfully"
Apr 13 20:21:22.123925 containerd[2111]: time="2026-04-13T20:21:22.120572205Z" level=info msg="RemovePodSandbox for \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\""
Apr 13 20:21:22.128302 containerd[2111]: time="2026-04-13T20:21:22.128265486Z" level=info msg="Forcibly stopping sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\""
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.276 [WARNING][7410] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.276 [INFO][7410] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.276 [INFO][7410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" iface="eth0" netns=""
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.276 [INFO][7410] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.276 [INFO][7410] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.328 [INFO][7420] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.328 [INFO][7420] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.328 [INFO][7420] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.338 [WARNING][7420] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.338 [INFO][7420] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" HandleID="k8s-pod-network.e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4" Workload="ip--172--31--17--28-k8s-whisker--7f9f779ffd--5rvbw-eth0"
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.342 [INFO][7420] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:21:22.353970 containerd[2111]: 2026-04-13 20:21:22.347 [INFO][7410] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4"
Apr 13 20:21:22.353970 containerd[2111]: time="2026-04-13T20:21:22.353677151Z" level=info msg="TearDown network for sandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" successfully"
Apr 13 20:21:22.397167 containerd[2111]: time="2026-04-13T20:21:22.396678463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:21:22.397167 containerd[2111]: time="2026-04-13T20:21:22.396831018Z" level=info msg="RemovePodSandbox \"e8391a434c37d61d37d5b0fe69ab4b34b4f02fd1c43c86afdd0fab01369ab3f4\" returns successfully"
Apr 13 20:21:22.536062 sshd[7340]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:22.561674 systemd[1]: sshd@18-172.31.17.28:22-50.85.169.122:54986.service: Deactivated successfully.
Apr 13 20:21:22.562974 systemd-logind[2083]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:21:22.568518 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:21:22.570242 systemd-logind[2083]: Removed session 19.
Apr 13 20:21:23.217249 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:23.219491 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:23.217283 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:25.267249 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:25.265231 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:25.265240 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:27.698485 systemd[1]: Started sshd@19-172.31.17.28:22-50.85.169.122:54992.service - OpenSSH per-connection server daemon (50.85.169.122:54992).
Apr 13 20:21:28.722936 sshd[7432]: Accepted publickey for core from 50.85.169.122 port 54992 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:28.729656 sshd[7432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:28.735648 systemd-logind[2083]: New session 20 of user core.
Apr 13 20:21:28.738496 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:21:30.140413 sshd[7432]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:30.149034 systemd[1]: sshd@19-172.31.17.28:22-50.85.169.122:54992.service: Deactivated successfully.
Apr 13 20:21:30.158007 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:21:30.160631 systemd-logind[2083]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:21:30.162546 systemd-logind[2083]: Removed session 20.
Apr 13 20:21:31.217572 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:31.219345 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:31.217602 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:35.326587 systemd[1]: Started sshd@20-172.31.17.28:22-50.85.169.122:55088.service - OpenSSH per-connection server daemon (50.85.169.122:55088).
Apr 13 20:21:36.403284 sshd[7481]: Accepted publickey for core from 50.85.169.122 port 55088 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:36.410448 sshd[7481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:36.416751 systemd-logind[2083]: New session 21 of user core.
Apr 13 20:21:36.421521 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:21:36.553572 systemd[1]: run-containerd-runc-k8s.io-c80c19c7c8670105e960dc6549061a516884fedcdd18c46e45f626b3259541d3-runc.0zUfsH.mount: Deactivated successfully.
Apr 13 20:21:37.233244 systemd-resolved[1989]: Under memory pressure, flushing caches.
Apr 13 20:21:37.235420 systemd-journald[1573]: Under memory pressure, flushing caches.
Apr 13 20:21:37.233278 systemd-resolved[1989]: Flushed all caches.
Apr 13 20:21:37.867056 sshd[7481]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:37.872719 systemd[1]: sshd@20-172.31.17.28:22-50.85.169.122:55088.service: Deactivated successfully.
Apr 13 20:21:37.873615 systemd-logind[2083]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:21:37.877346 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:21:37.878220 systemd-logind[2083]: Removed session 21.
Apr 13 20:21:52.413322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5-rootfs.mount: Deactivated successfully.
Apr 13 20:21:52.447297 containerd[2111]: time="2026-04-13T20:21:52.445820076Z" level=info msg="shim disconnected" id=095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5 namespace=k8s.io
Apr 13 20:21:52.452221 containerd[2111]: time="2026-04-13T20:21:52.447702677Z" level=warning msg="cleaning up after shim disconnected" id=095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5 namespace=k8s.io
Apr 13 20:21:52.452221 containerd[2111]: time="2026-04-13T20:21:52.447726297Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:21:53.275539 containerd[2111]: time="2026-04-13T20:21:53.275470487Z" level=info msg="shim disconnected" id=8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310 namespace=k8s.io
Apr 13 20:21:53.275807 containerd[2111]: time="2026-04-13T20:21:53.275540574Z" level=warning msg="cleaning up after shim disconnected" id=8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310 namespace=k8s.io
Apr 13 20:21:53.275807 containerd[2111]: time="2026-04-13T20:21:53.275553477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:21:53.282448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310-rootfs.mount: Deactivated successfully.
Apr 13 20:21:53.412779 kubelet[3399]: I0413 20:21:53.407791 3399 scope.go:117] "RemoveContainer" containerID="095b7c224f354d2c1ffb8527a9a67392c86d9f6220f2cf8de708679b0b6505d5"
Apr 13 20:21:53.506709 containerd[2111]: time="2026-04-13T20:21:53.506645600Z" level=info msg="CreateContainer within sandbox \"3a63942a8f80ebf183dddd21ee0907fb0138302500577d616cd0d039256afbe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 20:21:53.618640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310206130.mount: Deactivated successfully.
Apr 13 20:21:53.624544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272250762.mount: Deactivated successfully.
Apr 13 20:21:53.625890 containerd[2111]: time="2026-04-13T20:21:53.625844474Z" level=info msg="CreateContainer within sandbox \"3a63942a8f80ebf183dddd21ee0907fb0138302500577d616cd0d039256afbe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f7afb70e8546cf4a6163f1346f6d29ba017c5632e6aac4eb422bc7a806425bf8\""
Apr 13 20:21:53.629542 containerd[2111]: time="2026-04-13T20:21:53.629503122Z" level=info msg="StartContainer for \"f7afb70e8546cf4a6163f1346f6d29ba017c5632e6aac4eb422bc7a806425bf8\""
Apr 13 20:21:53.758339 containerd[2111]: time="2026-04-13T20:21:53.758301816Z" level=info msg="StartContainer for \"f7afb70e8546cf4a6163f1346f6d29ba017c5632e6aac4eb422bc7a806425bf8\" returns successfully"
Apr 13 20:21:54.346574 kubelet[3399]: I0413 20:21:54.346500 3399 scope.go:117] "RemoveContainer" containerID="8e41a9dd9705365974da981cf9bc39e05caa61717090be478e561a48eb334310"
Apr 13 20:21:54.373915 containerd[2111]: time="2026-04-13T20:21:54.373731231Z" level=info msg="CreateContainer within sandbox \"8dd900dad073b757e513d30f05c8e6c7efbd54b8d2f9d58f59105e1ae3a372ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 20:21:54.398716 containerd[2111]: time="2026-04-13T20:21:54.398516267Z" level=info msg="CreateContainer within sandbox \"8dd900dad073b757e513d30f05c8e6c7efbd54b8d2f9d58f59105e1ae3a372ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6d226fddfd7484f9aa91529042eabce04ab36187cd12c3df26fcbe2fcb60919e\""
Apr 13 20:21:54.400209 containerd[2111]: time="2026-04-13T20:21:54.399365049Z" level=info msg="StartContainer for \"6d226fddfd7484f9aa91529042eabce04ab36187cd12c3df26fcbe2fcb60919e\""
Apr 13 20:21:54.496167 containerd[2111]: time="2026-04-13T20:21:54.496097327Z" level=info msg="StartContainer for \"6d226fddfd7484f9aa91529042eabce04ab36187cd12c3df26fcbe2fcb60919e\" returns successfully"
Apr 13 20:21:57.097797 containerd[2111]: time="2026-04-13T20:21:57.097726972Z" level=info msg="shim disconnected" id=12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139 namespace=k8s.io
Apr 13 20:21:57.097797 containerd[2111]: time="2026-04-13T20:21:57.097796440Z" level=warning msg="cleaning up after shim disconnected" id=12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139 namespace=k8s.io
Apr 13 20:21:57.098533 containerd[2111]: time="2026-04-13T20:21:57.097808625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:21:57.104688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139-rootfs.mount: Deactivated successfully.
Apr 13 20:21:57.116564 containerd[2111]: time="2026-04-13T20:21:57.116496213Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:21:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:21:57.361186 kubelet[3399]: I0413 20:21:57.360793 3399 scope.go:117] "RemoveContainer" containerID="12d1faa274419f1f4b9c2151fdc8c0ba75cf04385e1e452abc71ca3d21ebc139"
Apr 13 20:21:57.372654 containerd[2111]: time="2026-04-13T20:21:57.371959836Z" level=info msg="CreateContainer within sandbox \"22ecf787dbb59bf4612b605d52024e581795975d6ffdcd256f709702058c2ef3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 20:21:57.400032 containerd[2111]: time="2026-04-13T20:21:57.399984373Z" level=info msg="CreateContainer within sandbox \"22ecf787dbb59bf4612b605d52024e581795975d6ffdcd256f709702058c2ef3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3b1ab70557bd5d56be86ec95890495b7d3040d1f04c76cf517a76862704e0f44\""
Apr 13 20:21:57.400671 containerd[2111]: time="2026-04-13T20:21:57.400632761Z" level=info msg="StartContainer for \"3b1ab70557bd5d56be86ec95890495b7d3040d1f04c76cf517a76862704e0f44\""
Apr 13 20:21:57.495198 containerd[2111]: time="2026-04-13T20:21:57.495089456Z" level=info msg="StartContainer for \"3b1ab70557bd5d56be86ec95890495b7d3040d1f04c76cf517a76862704e0f44\" returns successfully"
Apr 13 20:21:58.286758 kubelet[3399]: E0413 20:21:58.285252 3399 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 20:22:03.069776 systemd[1]: run-containerd-runc-k8s.io-5ae0f7c518b379df55ba512303578e845828fddb35e6c98236c06d0a2519b987-runc.tc8ioM.mount: Deactivated successfully.