Mar 14 00:22:23.936659 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:22:23.936699 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:23.936718 kernel: BIOS-provided physical RAM map:
Mar 14 00:22:23.936729 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:22:23.936739 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 14 00:22:23.936748 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 14 00:22:23.936762 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 14 00:22:23.936774 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 14 00:22:23.936786 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 14 00:22:23.936802 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 14 00:22:23.936815 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 14 00:22:23.936828 kernel: NX (Execute Disable) protection: active
Mar 14 00:22:23.936841 kernel: APIC: Static calls initialized
Mar 14 00:22:23.936854 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:22:23.936868 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 14 00:22:23.936884 kernel: SMBIOS 2.7 present.
Mar 14 00:22:23.936897 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 14 00:22:23.936910 kernel: Hypervisor detected: KVM
Mar 14 00:22:23.936921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:22:23.936932 kernel: kvm-clock: using sched offset of 4019566471 cycles
Mar 14 00:22:23.936946 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:22:23.936958 kernel: tsc: Detected 2499.996 MHz processor
Mar 14 00:22:23.936971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:22:23.936984 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:22:23.936997 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 14 00:22:23.937013 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:22:23.937026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:22:23.937038 kernel: Using GB pages for direct mapping
Mar 14 00:22:23.937051 kernel: Secure boot disabled
Mar 14 00:22:23.937063 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:22:23.937075 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 14 00:22:23.937088 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:22:23.937101 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:22:23.937114 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:22:23.937130 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 14 00:22:23.937143 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 14 00:22:23.937156 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:22:23.937168 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:22:23.937179 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 14 00:22:23.938743 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 14 00:22:23.938771 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:22:23.938790 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:22:23.938806 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 14 00:22:23.938822 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 14 00:22:23.938837 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 14 00:22:23.938853 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 14 00:22:23.938868 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 14 00:22:23.938883 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 14 00:22:23.938902 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 14 00:22:23.938917 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 14 00:22:23.938933 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 14 00:22:23.938949 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 14 00:22:23.938964 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 14 00:22:23.938980 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 14 00:22:23.938996 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 00:22:23.939011 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 00:22:23.939027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 14 00:22:23.939046 kernel: NUMA: Initialized distance table, cnt=1
Mar 14 00:22:23.939061 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 14 00:22:23.939076 kernel: Zone ranges:
Mar 14 00:22:23.939092 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:22:23.939109 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 14 00:22:23.939124 kernel: Normal empty
Mar 14 00:22:23.939140 kernel: Movable zone start for each node
Mar 14 00:22:23.939156 kernel: Early memory node ranges
Mar 14 00:22:23.939171 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:22:23.939186 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 14 00:22:23.939205 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 14 00:22:23.939220 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 14 00:22:23.939236 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:22:23.939251 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:22:23.939267 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 14 00:22:23.939283 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 14 00:22:23.939317 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 14 00:22:23.939331 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:22:23.939344 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 14 00:22:23.939363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:22:23.939379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:22:23.939393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:22:23.939409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:22:23.939425 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:22:23.939441 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:22:23.939457 kernel: TSC deadline timer available
Mar 14 00:22:23.939473 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:22:23.939488 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:22:23.939507 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 14 00:22:23.939523 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:22:23.939539 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:22:23.939555 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:22:23.939571 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:22:23.939586 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:22:23.939601 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:22:23.939616 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:22:23.939632 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:22:23.939652 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:23.939668 kernel: random: crng init done
Mar 14 00:22:23.939683 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:22:23.939699 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 00:22:23.939715 kernel: Fallback order for Node 0: 0
Mar 14 00:22:23.939731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 14 00:22:23.939746 kernel: Policy zone: DMA32
Mar 14 00:22:23.939762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:22:23.939781 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 14 00:22:23.939797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:22:23.939813 kernel: Kernel/User page tables isolation: enabled
Mar 14 00:22:23.939829 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:22:23.939844 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:22:23.939860 kernel: Dynamic Preempt: voluntary
Mar 14 00:22:23.939875 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:22:23.939892 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:22:23.939908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:22:23.939927 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:22:23.939942 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:22:23.939958 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:22:23.939974 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:22:23.939989 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:22:23.940006 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:22:23.940021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:22:23.940053 kernel: Console: colour dummy device 80x25
Mar 14 00:22:23.940069 kernel: printk: console [tty0] enabled
Mar 14 00:22:23.940086 kernel: printk: console [ttyS0] enabled
Mar 14 00:22:23.940102 kernel: ACPI: Core revision 20230628
Mar 14 00:22:23.940119 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 14 00:22:23.940140 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:22:23.940156 kernel: x2apic enabled
Mar 14 00:22:23.940173 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:22:23.940190 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 14 00:22:23.940207 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 14 00:22:23.940228 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 14 00:22:23.940244 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 14 00:22:23.940261 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:22:23.940278 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:22:23.940294 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:22:23.940699 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 14 00:22:23.940716 kernel: RETBleed: Vulnerable
Mar 14 00:22:23.940731 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:22:23.940745 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:22:23.940761 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:22:23.940781 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 14 00:22:23.940797 kernel: active return thunk: its_return_thunk
Mar 14 00:22:23.940811 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 00:22:23.940825 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:22:23.940840 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:22:23.940854 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:22:23.940870 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 14 00:22:23.940885 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 14 00:22:23.940900 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:22:23.940914 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:22:23.940929 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:22:23.940948 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:22:23.940964 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:22:23.940979 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 14 00:22:23.940995 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 14 00:22:23.941010 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 14 00:22:23.941025 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 14 00:22:23.941041 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 14 00:22:23.941056 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 14 00:22:23.941072 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 14 00:22:23.941086 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:22:23.941101 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:22:23.941121 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:22:23.941138 kernel: landlock: Up and running.
Mar 14 00:22:23.941154 kernel: SELinux: Initializing.
Mar 14 00:22:23.941170 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:22:23.941187 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:22:23.941203 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 14 00:22:23.941218 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:23.941234 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:23.941250 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:23.941265 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 14 00:22:23.941286 kernel: signal: max sigframe size: 3632
Mar 14 00:22:23.942343 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:22:23.942366 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:22:23.942384 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:22:23.942399 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:22:23.942417 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:22:23.942434 kernel: .... node #0, CPUs: #1
Mar 14 00:22:23.942452 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 14 00:22:23.942470 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 14 00:22:23.942491 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:22:23.942508 kernel: smpboot: Max logical packages: 1
Mar 14 00:22:23.942526 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 14 00:22:23.942542 kernel: devtmpfs: initialized
Mar 14 00:22:23.942559 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:22:23.942576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 14 00:22:23.942594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:22:23.942611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:22:23.942628 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:22:23.942649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:22:23.942666 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:22:23.942683 kernel: audit: type=2000 audit(1773447743.349:1): state=initialized audit_enabled=0 res=1
Mar 14 00:22:23.942699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:22:23.942716 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:22:23.942733 kernel: cpuidle: using governor menu
Mar 14 00:22:23.942750 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:22:23.942766 kernel: dca service started, version 1.12.1
Mar 14 00:22:23.942783 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:22:23.942803 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:22:23.942820 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:22:23.942837 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:22:23.942852 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:22:23.942868 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:22:23.942885 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:22:23.942902 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:22:23.942919 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:22:23.942936 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 14 00:22:23.942955 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:22:23.942973 kernel: ACPI: Interpreter enabled
Mar 14 00:22:23.942989 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:22:23.943006 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:22:23.943023 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:22:23.943040 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:22:23.943056 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 14 00:22:23.943073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:22:23.944328 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:22:23.944534 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 14 00:22:23.944675 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 14 00:22:23.944696 kernel: acpiphp: Slot [3] registered
Mar 14 00:22:23.944713 kernel: acpiphp: Slot [4] registered
Mar 14 00:22:23.944730 kernel: acpiphp: Slot [5] registered
Mar 14 00:22:23.944747 kernel: acpiphp: Slot [6] registered
Mar 14 00:22:23.944764 kernel: acpiphp: Slot [7] registered
Mar 14 00:22:23.944785 kernel: acpiphp: Slot [8] registered
Mar 14 00:22:23.944802 kernel: acpiphp: Slot [9] registered
Mar 14 00:22:23.944819 kernel: acpiphp: Slot [10] registered
Mar 14 00:22:23.944836 kernel: acpiphp: Slot [11] registered
Mar 14 00:22:23.944852 kernel: acpiphp: Slot [12] registered
Mar 14 00:22:23.944869 kernel: acpiphp: Slot [13] registered
Mar 14 00:22:23.944886 kernel: acpiphp: Slot [14] registered
Mar 14 00:22:23.944903 kernel: acpiphp: Slot [15] registered
Mar 14 00:22:23.944920 kernel: acpiphp: Slot [16] registered
Mar 14 00:22:23.944936 kernel: acpiphp: Slot [17] registered
Mar 14 00:22:23.944956 kernel: acpiphp: Slot [18] registered
Mar 14 00:22:23.944973 kernel: acpiphp: Slot [19] registered
Mar 14 00:22:23.944989 kernel: acpiphp: Slot [20] registered
Mar 14 00:22:23.945006 kernel: acpiphp: Slot [21] registered
Mar 14 00:22:23.945023 kernel: acpiphp: Slot [22] registered
Mar 14 00:22:23.945040 kernel: acpiphp: Slot [23] registered
Mar 14 00:22:23.945057 kernel: acpiphp: Slot [24] registered
Mar 14 00:22:23.945073 kernel: acpiphp: Slot [25] registered
Mar 14 00:22:23.945090 kernel: acpiphp: Slot [26] registered
Mar 14 00:22:23.945110 kernel: acpiphp: Slot [27] registered
Mar 14 00:22:23.945126 kernel: acpiphp: Slot [28] registered
Mar 14 00:22:23.945143 kernel: acpiphp: Slot [29] registered
Mar 14 00:22:23.945160 kernel: acpiphp: Slot [30] registered
Mar 14 00:22:23.945176 kernel: acpiphp: Slot [31] registered
Mar 14 00:22:23.945193 kernel: PCI host bridge to bus 0000:00
Mar 14 00:22:23.945358 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:22:23.945488 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:22:23.945642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:23.945777 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 14 00:22:23.945899 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:22:23.946022 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:22:23.946180 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 14 00:22:23.947376 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 14 00:22:23.947551 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 14 00:22:23.947698 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 14 00:22:23.947835 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 14 00:22:23.947971 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 14 00:22:23.948107 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 14 00:22:23.948241 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 14 00:22:23.952193 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 14 00:22:23.952403 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 14 00:22:23.952654 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 14 00:22:23.952794 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 14 00:22:23.952928 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:22:23.953062 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 14 00:22:23.953199 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:22:23.955045 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:22:23.955212 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 14 00:22:23.955382 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:22:23.955520 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 14 00:22:23.955542 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:22:23.955559 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:22:23.955575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:22:23.955591 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:22:23.955608 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 14 00:22:23.955628 kernel: iommu: Default domain type: Translated
Mar 14 00:22:23.955645 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:22:23.955661 kernel: efivars: Registered efivars operations
Mar 14 00:22:23.955677 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:22:23.955694 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:22:23.955711 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 14 00:22:23.955726 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 14 00:22:23.955859 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 14 00:22:23.955992 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 14 00:22:23.956131 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:22:23.956151 kernel: vgaarb: loaded
Mar 14 00:22:23.956168 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 14 00:22:23.956184 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 14 00:22:23.956200 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:22:23.956217 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:22:23.956233 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:22:23.956249 kernel: pnp: PnP ACPI init
Mar 14 00:22:23.956265 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:22:23.956285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:22:23.956312 kernel: NET: Registered PF_INET protocol family
Mar 14 00:22:23.956337 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:22:23.956359 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 14 00:22:23.956391 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:22:23.956431 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:22:23.956443 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 14 00:22:23.956458 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 14 00:22:23.956478 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:22:23.956493 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:22:23.956509 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:22:23.956523 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:22:23.956700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:22:23.956837 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:22:23.956964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:23.957089 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 14 00:22:23.957214 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:22:23.957466 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 14 00:22:23.957490 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:22:23.957507 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 14 00:22:23.957523 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 14 00:22:23.957539 kernel: clocksource: Switched to clocksource tsc
Mar 14 00:22:23.957555 kernel: Initialise system trusted keyrings
Mar 14 00:22:23.957571 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 14 00:22:23.957585 kernel: Key type asymmetric registered
Mar 14 00:22:23.957605 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:22:23.957621 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:22:23.957637 kernel: io scheduler mq-deadline registered
Mar 14 00:22:23.957652 kernel: io scheduler kyber registered
Mar 14 00:22:23.957668 kernel: io scheduler bfq registered
Mar 14 00:22:23.957683 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:22:23.957700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:22:23.957716 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:22:23.957732 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:22:23.957751 kernel: i8042: Warning: Keylock active
Mar 14 00:22:23.957766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:22:23.957782 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:22:23.957923 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 14 00:22:23.958049 kernel: rtc_cmos 00:00: registered as rtc0
Mar 14 00:22:23.958170 kernel: rtc_cmos 00:00: setting system clock to 2026-03-14T00:22:23 UTC (1773447743)
Mar 14 00:22:23.958292 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 14 00:22:23.958324 kernel: intel_pstate: CPU model not supported
Mar 14 00:22:23.958344 kernel: efifb: probing for efifb
Mar 14 00:22:23.958359 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 14 00:22:23.958374 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 14 00:22:23.958389 kernel: efifb: scrolling: redraw
Mar 14 00:22:23.958405 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 14 00:22:23.958419 kernel: Console: switching to colour frame buffer device 100x37
Mar 14 00:22:23.958433 kernel: fb0: EFI VGA frame buffer device
Mar 14 00:22:23.958449 kernel: pstore: Using crash dump compression: deflate
Mar 14 00:22:23.958464 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 14 00:22:23.958483 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:22:23.958506 kernel: Segment Routing with IPv6
Mar 14 00:22:23.958528 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:22:23.958542 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:22:23.958555 kernel: Key type dns_resolver registered
Mar 14 00:22:23.958570 kernel: IPI shorthand broadcast: enabled
Mar 14 00:22:23.958617 kernel: sched_clock: Marking stable (468003013, 126983496)->(664797237, -69810728)
Mar 14 00:22:23.958636 kernel: registered taskstats version 1
Mar 14 00:22:23.958649 kernel: Loading compiled-in X.509 certificates
Mar 14 00:22:23.959347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:22:23.959366 kernel: Key type .fscrypt registered
Mar 14 00:22:23.959383 kernel: Key type fscrypt-provisioning registered
Mar 14 00:22:23.959399 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:22:23.959417 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:22:23.959434 kernel: ima: No architecture policies found
Mar 14 00:22:23.959451 kernel: clk: Disabling unused clocks
Mar 14 00:22:23.959468 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:22:23.959485 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:22:23.959505 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:22:23.959521 kernel: Run /init as init process
Mar 14 00:22:23.959538 kernel: with arguments:
Mar 14 00:22:23.959554 kernel: /init
Mar 14 00:22:23.959570 kernel: with environment:
Mar 14 00:22:23.959586 kernel: HOME=/
Mar 14 00:22:23.959602 kernel: TERM=linux
Mar 14 00:22:23.959622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:23.959642 systemd[1]: Detected virtualization amazon.
Mar 14 00:22:23.959663 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:23.959680 systemd[1]: Running in initrd.
Mar 14 00:22:23.959696 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:22:23.959713 systemd[1]: Hostname set to .
Mar 14 00:22:23.959730 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:22:23.959747 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:22:23.959764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:23.959780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:23.959802 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:22:23.959826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:23.959843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:22:23.959865 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:22:23.959890 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:22:23.959910 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:22:23.959926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:23.959945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:23.959964 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:22:23.959983 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:23.960002 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:23.960020 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:22:23.960042 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:23.960061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:23.960080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:22:23.960098 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:22:23.960116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:23.960135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:23.960153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:23.960172 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:22:23.960193 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:22:23.960213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:23.960231 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:22:23.960250 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:22:23.960268 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:23.960286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:23.960342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:23.960362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:23.960381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:23.960444 systemd-journald[179]: Collecting audit messages is disabled.
Mar 14 00:22:23.960486 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:22:23.960510 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:22:23.960530 systemd-journald[179]: Journal started
Mar 14 00:22:23.960567 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2df0a349cca0f0a03bc4bebd78140d) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:22:23.956744 systemd-modules-load[180]: Inserted module 'overlay'
Mar 14 00:22:23.968322 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:23.973341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:23.983608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:22:23.988802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:23.991345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:22:24.004995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:22:24.011477 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:22:24.017326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:22:24.022519 kernel: Bridge firewalling registered
Mar 14 00:22:24.021289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:24.021952 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 14 00:22:24.026798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:24.029706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:24.035471 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:22:24.037580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:22:24.039268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:24.059645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:24.064573 dracut-cmdline[209]: dracut-dracut-053
Mar 14 00:22:24.071428 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:24.067560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:24.122934 systemd-resolved[222]: Positive Trust Anchors:
Mar 14 00:22:24.122952 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:22:24.123014 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:22:24.132727 systemd-resolved[222]: Defaulting to hostname 'linux'.
Mar 14 00:22:24.134116 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:24.135468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:24.160336 kernel: SCSI subsystem initialized
Mar 14 00:22:24.171324 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:22:24.181343 kernel: iscsi: registered transport (tcp)
Mar 14 00:22:24.203338 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:22:24.203427 kernel: QLogic iSCSI HBA Driver
Mar 14 00:22:24.241485 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:24.246513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:22:24.273651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:22:24.273728 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:22:24.273751 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:22:24.316334 kernel: raid6: avx512x4 gen() 18416 MB/s
Mar 14 00:22:24.334324 kernel: raid6: avx512x2 gen() 18484 MB/s
Mar 14 00:22:24.352324 kernel: raid6: avx512x1 gen() 18443 MB/s
Mar 14 00:22:24.370323 kernel: raid6: avx2x4 gen() 18424 MB/s
Mar 14 00:22:24.388324 kernel: raid6: avx2x2 gen() 18319 MB/s
Mar 14 00:22:24.406585 kernel: raid6: avx2x1 gen() 13904 MB/s
Mar 14 00:22:24.406644 kernel: raid6: using algorithm avx512x2 gen() 18484 MB/s
Mar 14 00:22:24.425540 kernel: raid6: .... xor() 24768 MB/s, rmw enabled
Mar 14 00:22:24.425582 kernel: raid6: using avx512x2 recovery algorithm
Mar 14 00:22:24.447347 kernel: xor: automatically using best checksumming function   avx
Mar 14 00:22:24.607335 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:22:24.617081 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:24.621511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:24.637440 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 14 00:22:24.642523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:24.651600 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:22:24.669396 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Mar 14 00:22:24.699510 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:24.704534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:24.755971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:24.765557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:22:24.791103 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:24.793821 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:24.795137 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:24.795670 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:24.804603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:22:24.831764 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:24.861460 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:22:24.870544 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:24.876222 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:22:24.876532 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:22:24.870804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:24.875001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:22:24.877359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:24.877649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:24.878705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:24.888320 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 14 00:22:24.891683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:24.901290 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:f3:32:e5:a6:4b
Mar 14 00:22:24.905739 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:22:24.905800 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:22:24.913960 (udev-worker)[455]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:22:24.915062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:24.924074 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:22:24.942349 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:22:24.947383 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 14 00:22:24.961690 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:22:24.961510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:24.967472 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:22:24.967527 kernel: GPT:9289727 != 33554431
Mar 14 00:22:24.968323 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:22:24.969556 kernel: GPT:9289727 != 33554431
Mar 14 00:22:24.970384 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:22:24.971446 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:22:25.042345 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Mar 14 00:22:25.070130 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (453)
Mar 14 00:22:25.095982 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:22:25.132140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:22:25.141891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:22:25.147713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:22:25.148187 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:22:25.159521 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:22:25.166989 disk-uuid[627]: Primary Header is updated.
Mar 14 00:22:25.166989 disk-uuid[627]: Secondary Entries is updated.
Mar 14 00:22:25.166989 disk-uuid[627]: Secondary Header is updated.
Mar 14 00:22:25.175346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:22:25.183249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:22:25.187323 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:22:26.190908 disk-uuid[628]: The operation has completed successfully.
Mar 14 00:22:26.192517 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:22:26.323570 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:22:26.323708 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:22:26.345553 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:22:26.350269 sh[971]: Success
Mar 14 00:22:26.371326 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:22:26.462426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:22:26.477502 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:22:26.479075 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:22:26.509382 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:22:26.509450 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:26.512574 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:22:26.512629 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:22:26.513935 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:22:26.592347 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:22:26.603129 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:22:26.604370 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:22:26.610466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:22:26.612562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:22:26.643945 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:26.644020 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:26.644045 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:22:26.653344 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:22:26.669560 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:26.669061 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:22:26.677021 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:22:26.685571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:22:26.717187 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:26.725628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:22:26.760035 systemd-networkd[1163]: lo: Link UP
Mar 14 00:22:26.760048 systemd-networkd[1163]: lo: Gained carrier
Mar 14 00:22:26.761963 systemd-networkd[1163]: Enumeration completed
Mar 14 00:22:26.762621 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:22:26.762792 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.762797 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:22:26.764495 systemd[1]: Reached target network.target - Network.
Mar 14 00:22:26.766072 systemd-networkd[1163]: eth0: Link UP
Mar 14 00:22:26.766077 systemd-networkd[1163]: eth0: Gained carrier
Mar 14 00:22:26.766089 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.777414 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.20.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:22:26.913953 ignition[1115]: Ignition 2.19.0
Mar 14 00:22:26.913967 ignition[1115]: Stage: fetch-offline
Mar 14 00:22:26.914260 ignition[1115]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:26.914274 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:26.916468 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:26.914616 ignition[1115]: Ignition finished successfully
Mar 14 00:22:26.922488 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:22:26.937273 ignition[1174]: Ignition 2.19.0
Mar 14 00:22:26.937287 ignition[1174]: Stage: fetch
Mar 14 00:22:26.937750 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:26.937763 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:26.937899 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:26.978818 ignition[1174]: PUT result: OK
Mar 14 00:22:26.984389 ignition[1174]: parsed url from cmdline: ""
Mar 14 00:22:26.984462 ignition[1174]: no config URL provided
Mar 14 00:22:26.984475 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:22:26.984493 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:22:26.984521 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:26.986209 ignition[1174]: PUT result: OK
Mar 14 00:22:26.986267 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:22:26.991249 ignition[1174]: GET result: OK
Mar 14 00:22:26.991376 ignition[1174]: parsing config with SHA512: 6664c1f85840c2caddcba9254fb6d75127a8b09b01478ef14da2633ccee88496fb585ba61c06893c1f1a0737f9341eaf9c59c4ad3eb2425a985090f2e23fbe50
Mar 14 00:22:26.995792 unknown[1174]: fetched base config from "system"
Mar 14 00:22:26.995807 unknown[1174]: fetched base config from "system"
Mar 14 00:22:26.996705 ignition[1174]: fetch: fetch complete
Mar 14 00:22:26.995816 unknown[1174]: fetched user config from "aws"
Mar 14 00:22:26.996713 ignition[1174]: fetch: fetch passed
Mar 14 00:22:26.998756 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:22:26.996783 ignition[1174]: Ignition finished successfully
Mar 14 00:22:27.008637 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:22:27.024252 ignition[1180]: Ignition 2.19.0
Mar 14 00:22:27.024265 ignition[1180]: Stage: kargs
Mar 14 00:22:27.024800 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:27.024815 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:27.024931 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:27.025759 ignition[1180]: PUT result: OK
Mar 14 00:22:27.028391 ignition[1180]: kargs: kargs passed
Mar 14 00:22:27.028515 ignition[1180]: Ignition finished successfully
Mar 14 00:22:27.030475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:27.038531 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:22:27.051551 ignition[1186]: Ignition 2.19.0
Mar 14 00:22:27.051566 ignition[1186]: Stage: disks
Mar 14 00:22:27.052119 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:27.052133 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:27.052279 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:27.054154 ignition[1186]: PUT result: OK
Mar 14 00:22:27.056968 ignition[1186]: disks: disks passed
Mar 14 00:22:27.057045 ignition[1186]: Ignition finished successfully
Mar 14 00:22:27.058904 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:22:27.059538 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:27.059910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:22:27.060579 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:27.061128 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:22:27.061702 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:22:27.067486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:22:27.099247 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:22:27.103426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:22:27.108536 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:22:27.211349 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:22:27.212350 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:22:27.213829 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:27.226455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:27.229431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:22:27.231252 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:22:27.232388 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:22:27.232430 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:27.244201 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:22:27.255550 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1213)
Mar 14 00:22:27.255583 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:27.255604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:27.255624 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:22:27.255575 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:22:27.262327 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:22:27.265009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:27.469515 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:22:27.475217 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:22:27.480628 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:22:27.485803 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:22:27.700940 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:27.706439 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:22:27.709521 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:27.719885 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:22:27.722325 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:27.752657 ignition[1325]: INFO : Ignition 2.19.0
Mar 14 00:22:27.752657 ignition[1325]: INFO : Stage: mount
Mar 14 00:22:27.752657 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:27.752657 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:27.752657 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:27.758380 ignition[1325]: INFO : PUT result: OK
Mar 14 00:22:27.758818 ignition[1325]: INFO : mount: mount passed
Mar 14 00:22:27.758818 ignition[1325]: INFO : Ignition finished successfully
Mar 14 00:22:27.761007 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:22:27.767690 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:22:27.771233 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:27.781528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:27.800339 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1337)
Mar 14 00:22:27.800485 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:27.803481 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:27.803530 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:22:27.810330 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:22:27.812132 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:27.840650 ignition[1354]: INFO : Ignition 2.19.0
Mar 14 00:22:27.840650 ignition[1354]: INFO : Stage: files
Mar 14 00:22:27.842152 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:27.842152 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:27.842152 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:27.842152 ignition[1354]: INFO : PUT result: OK
Mar 14 00:22:27.845321 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:22:27.846071 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:22:27.846071 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:22:27.867754 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:22:27.869066 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:22:27.869066 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:22:27.868317 unknown[1354]: wrote ssh authorized keys file for user: core
Mar 14 00:22:27.871868 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:27.871868 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:22:27.982539 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:22:28.161998 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:28.161998 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:22:28.164961 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 14 00:22:28.604913 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:22:28.684720 systemd-networkd[1163]: eth0: Gained IPv6LL
Mar 14 00:22:29.080967 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:22:29.080967 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:29.084778 ignition[1354]: INFO : files: files passed
Mar 14 00:22:29.084778 ignition[1354]: INFO : Ignition finished successfully
Mar 14 00:22:29.084456 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:22:29.091642 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:22:29.098418 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:22:29.100879 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:22:29.101656 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:22:29.115663 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:29.115663 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:29.118715 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:29.120708 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:29.121678 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:22:29.130487 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:22:29.157986 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:22:29.158118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:22:29.159366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:29.160521 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:22:29.161277 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:22:29.162514 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:22:29.179592 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:29.184576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:22:29.197327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:29.197985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:29.198924 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:22:29.199915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:22:29.200090 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:29.201347 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:22:29.202203 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:22:29.202986 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:22:29.203759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:29.204637 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:29.205416 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:22:29.206161 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:29.206936 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:22:29.208073 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:22:29.208885 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:22:29.209609 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:22:29.209782 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:29.210852 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:29.211657 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:29.212340 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:22:29.213113 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:29.213666 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:22:29.213833 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:29.215292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:22:29.215484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:29.216179 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:22:29.216342 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:22:29.229603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:22:29.230238 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:22:29.230451 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:29.234636 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:29.235802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:22:29.236047 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:29.237675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:22:29.237882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:29.253461 ignition[1406]: INFO : Ignition 2.19.0
Mar 14 00:22:29.253461 ignition[1406]: INFO : Stage: umount
Mar 14 00:22:29.253461 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:29.253461 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:29.253461 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:29.259664 ignition[1406]: INFO : PUT result: OK
Mar 14 00:22:29.254724 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:22:29.265440 ignition[1406]: INFO : umount: umount passed
Mar 14 00:22:29.265440 ignition[1406]: INFO : Ignition finished successfully
Mar 14 00:22:29.254882 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:22:29.264104 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:22:29.264238 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:22:29.265733 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:22:29.265806 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:22:29.266590 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:22:29.266649 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:29.267382 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:22:29.267439 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:22:29.268238 systemd[1]: Stopped target network.target - Network.
Mar 14 00:22:29.268746 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:22:29.268805 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:29.269378 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:22:29.269836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:22:29.275294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:29.276279 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:22:29.276697 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:22:29.277353 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:22:29.277415 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:29.278330 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:22:29.278384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:29.279413 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:22:29.279471 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:22:29.279780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:22:29.279820 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:29.280273 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:22:29.280853 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:29.284389 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Mar 14 00:22:29.287066 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:22:29.287914 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:22:29.288059 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:22:29.290155 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:22:29.290292 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:29.292808 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:22:29.292973 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:29.295344 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:22:29.295404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:29.296134 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:22:29.296194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:29.302412 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:22:29.302927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:22:29.302998 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:29.303588 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:22:29.303645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:29.304140 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:22:29.304179 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:29.304760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:22:29.304815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:29.305556 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:29.323690 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:22:29.324050 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:29.326210 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:22:29.326363 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:22:29.327958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:22:29.328044 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:29.328737 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:22:29.328786 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:29.329472 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:22:29.329536 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:29.330579 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:22:29.330637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:29.331656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:29.331713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:29.338475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:22:29.339790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:22:29.340515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:29.341147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:29.341205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:29.346491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:22:29.346626 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:22:29.347806 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:22:29.358525 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:22:29.367149 systemd[1]: Switching root.
Mar 14 00:22:29.400550 systemd-journald[179]: Journal stopped
Mar 14 00:22:31.822557 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:22:31.822666 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:22:31.822692 kernel: SELinux: policy capability open_perms=1
Mar 14 00:22:31.822713 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:22:31.822733 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:22:31.822753 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:22:31.822775 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:22:31.822803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:22:31.822829 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:22:31.822851 kernel: audit: type=1403 audit(1773447750.556:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:22:31.822876 systemd[1]: Successfully loaded SELinux policy in 53.399ms.
Mar 14 00:22:31.822907 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.683ms.
Mar 14 00:22:31.822931 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:31.822953 systemd[1]: Detected virtualization amazon.
Mar 14 00:22:31.822974 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:31.822995 systemd[1]: Detected first boot.
Mar 14 00:22:31.823016 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:22:31.823037 zram_generator::config[1448]: No configuration found.
Mar 14 00:22:31.823069 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:22:31.823090 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:22:31.823112 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:22:31.823135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:22:31.823157 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:22:31.823179 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:22:31.823201 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:22:31.823229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:22:31.823252 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:22:31.823276 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:22:31.823298 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:22:31.824377 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:22:31.824400 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:31.824445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:31.824468 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:22:31.824489 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:22:31.824511 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:22:31.824534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:31.824561 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:22:31.824583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:31.824604 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:22:31.824626 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:31.824649 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:31.824672 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:22:31.824693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:31.824720 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:31.824742 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:31.824763 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:31.824785 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:22:31.824806 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:22:31.824829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:31.824850 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:31.824876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:31.824897 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:22:31.824918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:22:31.824942 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:22:31.824962 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:22:31.824982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:31.825002 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:22:31.825022 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:22:31.825042 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:22:31.825062 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:22:31.825081 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:22:31.825105 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:22:31.825125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:31.825145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:31.825164 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:22:31.825185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:31.825204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:31.825223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:31.825244 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:22:31.825264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:31.825287 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:22:31.826366 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:22:31.826394 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:22:31.827358 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:22:31.827399 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:22:31.827422 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:31.827443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:31.827465 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:22:31.827493 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:22:31.827515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:31.827536 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:22:31.827557 systemd[1]: Stopped verity-setup.service.
Mar 14 00:22:31.827580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:31.827601 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:22:31.827629 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:22:31.827649 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:22:31.827669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:22:31.827694 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:22:31.827713 kernel: ACPI: bus type drm_connector registered
Mar 14 00:22:31.827733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:22:31.827753 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:31.827773 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:22:31.827796 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:22:31.827816 kernel: loop: module loaded
Mar 14 00:22:31.827834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:31.827853 kernel: fuse: init (API version 7.39)
Mar 14 00:22:31.827871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:31.827926 systemd-journald[1540]: Collecting audit messages is disabled.
Mar 14 00:22:31.827962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:22:31.827985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:31.828004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:31.828023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:31.828043 systemd-journald[1540]: Journal started
Mar 14 00:22:31.828079 systemd-journald[1540]: Runtime Journal (/run/log/journal/ec2df0a349cca0f0a03bc4bebd78140d) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:22:31.433671 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:22:31.830400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:31.461661 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 14 00:22:31.462097 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:22:31.831549 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:31.834425 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:22:31.834761 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:22:31.835918 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:31.836132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:31.837350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:31.838557 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:22:31.839927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:22:31.855823 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:22:31.866925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:22:31.873680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:22:31.875479 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:22:31.875540 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:31.879193 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:22:31.887487 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:22:31.897538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:22:31.899247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:31.906528 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:22:31.909518 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:22:31.911293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:31.916588 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:22:31.918481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:31.924541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:22:31.930524 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:22:31.934015 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:22:31.939031 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:22:31.940867 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:22:31.942635 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:22:31.962519 systemd-journald[1540]: Time spent on flushing to /var/log/journal/ec2df0a349cca0f0a03bc4bebd78140d is 91.241ms for 979 entries.
Mar 14 00:22:31.962519 systemd-journald[1540]: System Journal (/var/log/journal/ec2df0a349cca0f0a03bc4bebd78140d) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:22:32.068149 systemd-journald[1540]: Received client request to flush runtime journal.
Mar 14 00:22:32.068215 kernel: loop0: detected capacity change from 0 to 142488
Mar 14 00:22:31.969750 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:31.982581 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:22:31.988480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:22:31.990751 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:22:31.996611 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:22:32.054619 udevadm[1583]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:22:32.072381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:22:32.091878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:32.147586 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:22:32.148413 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:22:32.174189 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:22:32.184068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:22:32.203336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:22:32.230332 kernel: loop1: detected capacity change from 0 to 219192
Mar 14 00:22:32.232227 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Mar 14 00:22:32.232255 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Mar 14 00:22:32.246131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:32.433434 kernel: loop2: detected capacity change from 0 to 140768
Mar 14 00:22:32.590332 kernel: loop3: detected capacity change from 0 to 61336
Mar 14 00:22:32.712428 kernel: loop4: detected capacity change from 0 to 142488
Mar 14 00:22:32.774337 kernel: loop5: detected capacity change from 0 to 219192
Mar 14 00:22:32.784460 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:22:32.790569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:32.816327 kernel: loop6: detected capacity change from 0 to 140768
Mar 14 00:22:32.821405 systemd-udevd[1605]: Using default interface naming scheme 'v255'.
Mar 14 00:22:32.841345 kernel: loop7: detected capacity change from 0 to 61336
Mar 14 00:22:32.853516 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 14 00:22:32.854166 (sd-merge)[1603]: Merged extensions into '/usr'.
Mar 14 00:22:32.859094 systemd[1]: Reloading requested from client PID 1577 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:22:32.859111 systemd[1]: Reloading...
Mar 14 00:22:32.988463 zram_generator::config[1646]: No configuration found.
Mar 14 00:22:33.045786 (udev-worker)[1611]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:22:33.207356 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 14 00:22:33.234036 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:22:33.234135 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Mar 14 00:22:33.234164 kernel: ACPI: button: Sleep Button [SLPF]
Mar 14 00:22:33.255382 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1621)
Mar 14 00:22:33.267358 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 14 00:22:33.283534 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:22:33.371930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:33.445332 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:22:33.542623 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:22:33.543153 systemd[1]: Reloading finished in 683 ms.
Mar 14 00:22:33.569589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:33.570384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:22:33.587603 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:22:33.607097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:22:33.612612 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:22:33.615537 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:22:33.620519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:22:33.630627 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:22:33.635505 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:33.639761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:33.658352 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:22:33.659224 systemd[1]: Reloading requested from client PID 1791 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:22:33.659248 systemd[1]: Reloading...
Mar 14 00:22:33.663391 ldconfig[1572]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:22:33.677345 lvm[1792]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:22:33.745400 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:22:33.748604 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:22:33.754721 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:22:33.756928 systemd-tmpfiles[1795]: ACLs are not supported, ignoring. Mar 14 00:22:33.758588 systemd-tmpfiles[1795]: ACLs are not supported, ignoring. Mar 14 00:22:33.773603 systemd-tmpfiles[1795]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:22:33.773621 systemd-tmpfiles[1795]: Skipping /boot Mar 14 00:22:33.805865 systemd-tmpfiles[1795]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:22:33.808524 systemd-tmpfiles[1795]: Skipping /boot Mar 14 00:22:33.818424 zram_generator::config[1832]: No configuration found. Mar 14 00:22:33.905688 systemd-networkd[1794]: lo: Link UP Mar 14 00:22:33.906063 systemd-networkd[1794]: lo: Gained carrier Mar 14 00:22:33.907882 systemd-networkd[1794]: Enumeration completed Mar 14 00:22:33.908519 systemd-networkd[1794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:33.908529 systemd-networkd[1794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:22:33.911671 systemd-networkd[1794]: eth0: Link UP Mar 14 00:22:33.911859 systemd-networkd[1794]: eth0: Gained carrier Mar 14 00:22:33.911881 systemd-networkd[1794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:33.922389 systemd-networkd[1794]: eth0: DHCPv4 address 172.31.20.55/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:22:34.001025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:22:34.077524 systemd[1]: Reloading finished in 416 ms. 
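The eth0 lines above show systemd-networkd matching the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and acquiring 172.31.20.55/20 over DHCPv4. A unit with roughly the same effect looks like this; the path and match pattern are a sketch, not the literal contents of zz-default.network:

```ini
# /etc/systemd/network/50-dhcp.network  (hypothetical path)
[Match]
Name=eth*

[Network]
DHCP=yes
```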
Mar 14 00:22:34.096764 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:22:34.097478 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:22:34.098165 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:22:34.101889 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:22:34.102807 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:22:34.103837 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:22:34.104829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:34.124830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:34.131663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:34.137449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:22:34.147459 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:22:34.152650 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:22:34.157774 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:22:34.162332 lvm[1896]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:22:34.163148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:22:34.175662 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:22:34.182651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 14 00:22:34.182952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:22:34.198674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:22:34.203644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:22:34.208381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:22:34.209491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:22:34.209747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:34.214815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:34.215125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:22:34.216423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:22:34.216587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:34.222081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:34.223446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:22:34.232697 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:22:34.237159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 14 00:22:34.237527 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:22:34.238238 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:34.240092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:22:34.241413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:22:34.247291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:22:34.249814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:22:34.251262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:22:34.260807 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:22:34.261017 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:22:34.276590 systemd[1]: Finished ensure-sysext.service. Mar 14 00:22:34.279327 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:22:34.281213 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:22:34.281616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:22:34.286536 augenrules[1924]: No rules Mar 14 00:22:34.289724 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:34.292205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:22:34.292556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:22:34.303718 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:22:34.311608 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 14 00:22:34.337819 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:22:34.338765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:22:34.343344 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:22:34.349979 systemd-resolved[1904]: Positive Trust Anchors: Mar 14 00:22:34.349998 systemd-resolved[1904]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:22:34.350045 systemd-resolved[1904]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:22:34.354929 systemd-resolved[1904]: Defaulting to hostname 'linux'. Mar 14 00:22:34.356886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:22:34.357408 systemd[1]: Reached target network.target - Network. Mar 14 00:22:34.357806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:22:34.358173 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:22:34.358651 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:22:34.359053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 14 00:22:34.359599 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:22:34.360039 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:22:34.360476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:22:34.360834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:22:34.360874 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:22:34.361224 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:22:34.362573 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:22:34.364441 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:22:34.372582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:22:34.373741 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:22:34.374253 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:22:34.374652 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:22:34.375064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:22:34.375109 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:22:34.376224 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:22:34.380511 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:22:34.387509 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:22:34.389271 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:22:34.397201 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
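The docker.socket unit started above is the one systemd warned about earlier (its ListenStream= references the legacy /var/run/docker.sock). A drop-in override silences the warning by resetting the inherited listener and re-pointing it at /run; the drop-in file name is illustrative, but the empty-assignment-then-reassign pattern is standard systemd practice:

```ini
# /etc/systemd/system/docker.socket.d/10-runtime-dir.conf  (hypothetical)
[Socket]
# An empty assignment clears the ListenStream list inherited from the vendor unit
ListenStream=
ListenStream=/run/docker.sock
```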
Mar 14 00:22:34.397815 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:22:34.401213 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:22:34.423537 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:22:34.429484 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:22:34.432392 jq[1942]: false Mar 14 00:22:34.433977 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:22:34.443512 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:22:34.449420 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:22:34.460533 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:22:34.470934 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:22:34.471646 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:22:34.478561 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:22:34.484469 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:22:34.495855 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 14 00:22:34.496122 extend-filesystems[1943]: Found loop4 Mar 14 00:22:34.497132 extend-filesystems[1943]: Found loop5 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found loop6 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found loop7 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p1 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p2 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p3 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found usr Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p4 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p6 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p7 Mar 14 00:22:34.498427 extend-filesystems[1943]: Found nvme0n1p9 Mar 14 00:22:34.498427 extend-filesystems[1943]: Checking size of /dev/nvme0n1p9 Mar 14 00:22:34.497621 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:22:34.508802 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:22:34.509111 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:22:34.521789 extend-filesystems[1943]: Resized partition /dev/nvme0n1p9 Mar 14 00:22:34.528670 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:22:34.534323 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:22:34.570844 dbus-daemon[1941]: [system] SELinux support is enabled Mar 14 00:22:34.573731 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:22:34.580793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 14 00:22:34.580857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:22:34.581418 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:22:34.581452 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:22:34.605808 update_engine[1953]: I20260314 00:22:34.601872 1953 main.cc:92] Flatcar Update Engine starting Mar 14 00:22:34.610903 jq[1955]: true Mar 14 00:22:34.627538 dbus-daemon[1941]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1794 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:22:34.629447 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:22:34.629723 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:22:34.644919 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: ---------------------------------------------------- Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: corporation. Support and training for ntp-4 are
Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: available at https://www.nwtime.org/support Mar 14 00:22:34.648281 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: ---------------------------------------------------- Mar 14 00:22:34.644957 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:22:34.660645 update_engine[1953]: I20260314 00:22:34.652577 1953 update_check_scheduler.cc:74] Next update check in 4m16s Mar 14 00:22:34.648650 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:22:34.662406 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: proto: precision = 0.078 usec (-24) Mar 14 00:22:34.644971 ntpd[1945]: ---------------------------------------------------- Mar 14 00:22:34.663607 tar[1959]: linux-amd64/LICENSE Mar 14 00:22:34.663607 tar[1959]: linux-amd64/helm Mar 14 00:22:34.654777 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:22:34.644982 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:22:34.665609 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: basedate set to 2026-03-01 Mar 14 00:22:34.665609 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: gps base set to 2026-03-01 (week 2408) Mar 14 00:22:34.655992 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:22:34.644994 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:22:34.659339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:22:34.645006 ntpd[1945]: corporation. Support and training for ntp-4 are
Mar 14 00:22:34.645018 ntpd[1945]: available at https://www.nwtime.org/support Mar 14 00:22:34.645031 ntpd[1945]: ---------------------------------------------------- Mar 14 00:22:34.656995 ntpd[1945]: proto: precision = 0.078 usec (-24) Mar 14 00:22:34.663773 ntpd[1945]: basedate set to 2026-03-01 Mar 14 00:22:34.663794 ntpd[1945]: gps base set to 2026-03-01 (week 2408) Mar 14 00:22:34.679915 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:22:34.681997 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:22:34.681997 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:22:34.679976 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:22:34.682449 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:22:34.682999 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:22:34.682501 ntpd[1945]: Listen normally on 3 eth0 172.31.20.55:123 Mar 14 00:22:34.685871 ntpd[1945]: Listen normally on 4 lo [::1]:123 Mar 14 00:22:34.688409 jq[1987]: true Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listen normally on 3 eth0 172.31.20.55:123 Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listen normally on 4 lo [::1]:123 Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: bind(21) AF_INET6 fe80::4f3:32ff:fee5:a64b%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: unable to create socket on eth0 (5) for fe80::4f3:32ff:fee5:a64b%2#123 Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: failed to init interface for address fe80::4f3:32ff:fee5:a64b%2 Mar 14 00:22:34.688691 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: Listening on routing socket on fd #21 for interface updates Mar 14 00:22:34.686111 ntpd[1945]: bind(21) AF_INET6 fe80::4f3:32ff:fee5:a64b%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:22:34.686142 ntpd[1945]: unable to create socket on eth0 (5) for fe80::4f3:32ff:fee5:a64b%2#123 Mar 14 00:22:34.686158 ntpd[1945]: failed to init interface for address fe80::4f3:32ff:fee5:a64b%2 Mar 14 00:22:34.686202 ntpd[1945]: Listening on routing socket on fd #21 for interface updates Mar 14 00:22:34.705337 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:22:34.698921 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 14 00:22:34.691849 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:34.743737 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:34.743737 ntpd[1945]: 14 Mar 00:22:34 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:34.718521 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:34.755484 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:22:34.755484 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:22:34.755484 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:22:34.754387 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:22:34.763511 extend-filesystems[1943]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:22:34.754646 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
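The resize log above is easy to sanity-check: resize2fs reports sizes in 4 KiB blocks, so the growth from 553472 to 3587067 blocks converts directly to bytes. A quick arithmetic sketch, not part of the boot process itself:

```python
BLOCK = 4096  # resize2fs reported "(4k) blocks"

def blocks_to_bytes(blocks: int) -> int:
    """Convert a resize2fs 4 KiB block count to bytes."""
    return blocks * BLOCK

old = blocks_to_bytes(553472)    # size before the online resize
new = blocks_to_bytes(3587067)   # size after, matching the kernel's EXT4-fs line
print(old, new, round(new / 2**30, 2))  # ~2.1 GiB grown to ~13.68 GiB
```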
Mar 14 00:22:34.791178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1616) Mar 14 00:22:34.821135 coreos-metadata[1940]: Mar 14 00:22:34.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:22:34.827418 coreos-metadata[1940]: Mar 14 00:22:34.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:22:34.827418 coreos-metadata[1940]: Mar 14 00:22:34.826 INFO Fetch successful Mar 14 00:22:34.827418 coreos-metadata[1940]: Mar 14 00:22:34.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:22:34.829138 coreos-metadata[1940]: Mar 14 00:22:34.828 INFO Fetch successful Mar 14 00:22:34.829138 coreos-metadata[1940]: Mar 14 00:22:34.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:22:34.836501 coreos-metadata[1940]: Mar 14 00:22:34.829 INFO Fetch successful Mar 14 00:22:34.836501 coreos-metadata[1940]: Mar 14 00:22:34.829 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:22:34.838038 coreos-metadata[1940]: Mar 14 00:22:34.837 INFO Fetch successful Mar 14 00:22:34.838038 coreos-metadata[1940]: Mar 14 00:22:34.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:22:34.839233 coreos-metadata[1940]: Mar 14 00:22:34.839 INFO Fetch failed with 404: resource not found Mar 14 00:22:34.839233 coreos-metadata[1940]: Mar 14 00:22:34.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:22:34.847907 coreos-metadata[1940]: Mar 14 00:22:34.842 INFO Fetch successful Mar 14 00:22:34.847907 coreos-metadata[1940]: Mar 14 00:22:34.842 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:22:34.847907 coreos-metadata[1940]: Mar 14 00:22:34.845 INFO Fetch successful Mar 14 00:22:34.847907 coreos-metadata[1940]: Mar 14 00:22:34.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 14 00:22:34.852171 coreos-metadata[1940]: Mar 14 00:22:34.851 INFO Fetch successful Mar 14 00:22:34.852171 coreos-metadata[1940]: Mar 14 00:22:34.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:22:34.852714 coreos-metadata[1940]: Mar 14 00:22:34.852 INFO Fetch successful Mar 14 00:22:34.852714 coreos-metadata[1940]: Mar 14 00:22:34.852 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:22:34.853510 coreos-metadata[1940]: Mar 14 00:22:34.853 INFO Fetch successful Mar 14 00:22:34.866375 bash[2020]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:22:34.890987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:22:34.901597 systemd[1]: Starting sshkeys.service... Mar 14 00:22:34.917516 systemd-logind[1952]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:22:34.917548 systemd-logind[1952]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 14 00:22:34.917574 systemd-logind[1952]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:22:34.937499 systemd-logind[1952]: New seat seat0. Mar 14 00:22:34.939899 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:22:34.998428 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:22:35.008816 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:22:35.017060 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:22:35.018368 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
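The coreos-metadata fetch sequence above follows the IMDSv2 pattern: first PUT /latest/api/token to obtain a session token, then GET each metadata path with the token in a header (the 404 for meta-data/ipv6 just means the instance has no IPv6 address). A sketch of the same flow in Python's urllib, not the agent's actual code; the requests are only constructed here, since they resolve solely from inside an EC2 instance:

```python
from urllib.request import Request

IMDS = "http://169.254.169.254"

def token_request(ttl: int = 21600) -> Request:
    # IMDSv2 step 1: PUT with a TTL header yields a session token
    return Request(f"{IMDS}/latest/api/token", method="PUT",
                   headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})

def metadata_request(path: str, token: str) -> Request:
    # IMDSv2 step 2: GET a metadata path, passing the token in a header;
    # the "2021-01-03" version segment matches the URLs in the log above
    return Request(f"{IMDS}/2021-01-03/meta-data/{path}",
                   headers={"X-aws-ec2-metadata-token": token})

req = metadata_request("local-ipv4", "EXAMPLE-TOKEN")
print(req.full_url)  # http://169.254.169.254/2021-01-03/meta-data/local-ipv4
```

On an instance, each Request would be executed with urllib.request.urlopen; the agent retries and logs "Fetch successful" or the HTTP error, as seen above.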
Mar 14 00:22:35.172095 dbus-daemon[1941]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:22:35.174722 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 14 00:22:35.196825 dbus-daemon[1941]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1994 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:22:35.212184 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:22:35.283536 polkitd[2096]: Started polkitd version 121 Mar 14 00:22:35.284242 locksmithd[1998]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:22:35.285603 coreos-metadata[2051]: Mar 14 00:22:35.285 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:22:35.292806 coreos-metadata[2051]: Mar 14 00:22:35.289 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:22:35.292806 coreos-metadata[2051]: Mar 14 00:22:35.292 INFO Fetch successful Mar 14 00:22:35.292806 coreos-metadata[2051]: Mar 14 00:22:35.292 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:22:35.299329 coreos-metadata[2051]: Mar 14 00:22:35.297 INFO Fetch successful Mar 14 00:22:35.300819 unknown[2051]: wrote ssh authorized keys file for user: core Mar 14 00:22:35.318271 polkitd[2096]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:22:35.318371 polkitd[2096]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:22:35.333768 polkitd[2096]: Finished loading, compiling and executing 2 rules Mar 14 00:22:35.337382 update-ssh-keys[2119]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:22:35.338797 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Mar 14 00:22:35.341738 systemd-networkd[1794]: eth0: Gained IPv6LL Mar 14 00:22:35.345235 systemd[1]: Finished sshkeys.service. Mar 14 00:22:35.354151 dbus-daemon[1941]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:22:35.355447 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:22:35.364368 polkitd[2096]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:22:35.358404 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:22:35.362636 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:22:35.372760 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:22:35.383470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:35.385745 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:22:35.472033 systemd-resolved[1904]: System hostname changed to 'ip-172-31-20-55'. Mar 14 00:22:35.474532 systemd-hostnamed[1994]: Hostname set to (transient) Mar 14 00:22:35.521295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:22:35.621332 amazon-ssm-agent[2130]: Initializing new seelog logger Mar 14 00:22:35.624438 amazon-ssm-agent[2130]: New Seelog Logger Creation Complete Mar 14 00:22:35.624652 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.625331 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.625331 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 processing appconfig overrides Mar 14 00:22:35.628528 containerd[1977]: time="2026-03-14T00:22:35.627699350Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:22:35.629416 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:22:35.629639 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.630231 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 processing appconfig overrides Mar 14 00:22:35.630685 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.631339 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.631521 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 processing appconfig overrides Mar 14 00:22:35.632380 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO Proxy environment variables: Mar 14 00:22:35.638766 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.638766 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:22:35.638766 amazon-ssm-agent[2130]: 2026/03/14 00:22:35 processing appconfig overrides Mar 14 00:22:35.733619 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO https_proxy: Mar 14 00:22:35.752246 containerd[1977]: time="2026-03-14T00:22:35.752004435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.758865 containerd[1977]: time="2026-03-14T00:22:35.758809903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:35.758865 containerd[1977]: time="2026-03-14T00:22:35.758862777Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:22:35.759029 containerd[1977]: time="2026-03-14T00:22:35.758885490Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:22:35.759090 containerd[1977]: time="2026-03-14T00:22:35.759062966Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:22:35.759144 containerd[1977]: time="2026-03-14T00:22:35.759124269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.759238 containerd[1977]: time="2026-03-14T00:22:35.759216721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:35.759280 containerd[1977]: time="2026-03-14T00:22:35.759242297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.759778297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.759854203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.759877047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.759895208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.760001186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.760875296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.762485802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.762513747Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.762612903Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:22:35.763336 containerd[1977]: time="2026-03-14T00:22:35.762663829Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:22:35.768965 containerd[1977]: time="2026-03-14T00:22:35.768917206Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:22:35.769067 containerd[1977]: time="2026-03-14T00:22:35.768995689Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:22:35.769067 containerd[1977]: time="2026-03-14T00:22:35.769019447Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:22:35.769067 containerd[1977]: time="2026-03-14T00:22:35.769041505Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:22:35.769067 containerd[1977]: time="2026-03-14T00:22:35.769061987Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:22:35.769275 containerd[1977]: time="2026-03-14T00:22:35.769251545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:22:35.771914 containerd[1977]: time="2026-03-14T00:22:35.771881692Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:22:35.772124 containerd[1977]: time="2026-03-14T00:22:35.772099776Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:22:35.772174 containerd[1977]: time="2026-03-14T00:22:35.772132072Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:22:35.772213 containerd[1977]: time="2026-03-14T00:22:35.772171097Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:22:35.772213 containerd[1977]: time="2026-03-14T00:22:35.772194650Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.772298 containerd[1977]: time="2026-03-14T00:22:35.772215984Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.772298 containerd[1977]: time="2026-03-14T00:22:35.772236306Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.772298 containerd[1977]: time="2026-03-14T00:22:35.772257197Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772278230Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772545282Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772566747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772585870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772617379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772638304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772656950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772676709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772696298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772761055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772779336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772798769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772831417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774319 containerd[1977]: time="2026-03-14T00:22:35.772853729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.772888826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.772908983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.772929414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.772994703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773025670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773043756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773061207Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773134188Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773159169Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773288882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773323009Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773340225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773359016Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:22:35.774854 containerd[1977]: time="2026-03-14T00:22:35.773373967Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:22:35.775339 containerd[1977]: time="2026-03-14T00:22:35.773389663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:22:35.775381 containerd[1977]: time="2026-03-14T00:22:35.774949135Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:22:35.775381 containerd[1977]: time="2026-03-14T00:22:35.775038310Z" level=info msg="Connect containerd service" Mar 14 00:22:35.775381 containerd[1977]: time="2026-03-14T00:22:35.775099037Z" level=info msg="using legacy CRI server" Mar 14 00:22:35.775381 containerd[1977]: time="2026-03-14T00:22:35.775110448Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:22:35.775697 containerd[1977]: time="2026-03-14T00:22:35.775465244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:22:35.785191 containerd[1977]: time="2026-03-14T00:22:35.785096146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:22:35.786998 containerd[1977]: time="2026-03-14T00:22:35.786965424Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:22:35.787080 containerd[1977]: time="2026-03-14T00:22:35.787041890Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 14 00:22:35.787121 containerd[1977]: time="2026-03-14T00:22:35.787085471Z" level=info msg="Start subscribing containerd event" Mar 14 00:22:35.787157 containerd[1977]: time="2026-03-14T00:22:35.787136961Z" level=info msg="Start recovering state" Mar 14 00:22:35.787237 containerd[1977]: time="2026-03-14T00:22:35.787219766Z" level=info msg="Start event monitor" Mar 14 00:22:35.787277 containerd[1977]: time="2026-03-14T00:22:35.787246851Z" level=info msg="Start snapshots syncer" Mar 14 00:22:35.787277 containerd[1977]: time="2026-03-14T00:22:35.787260867Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:22:35.787277 containerd[1977]: time="2026-03-14T00:22:35.787273078Z" level=info msg="Start streaming server" Mar 14 00:22:35.788450 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:22:35.789377 containerd[1977]: time="2026-03-14T00:22:35.789138271Z" level=info msg="containerd successfully booted in 0.163206s" Mar 14 00:22:35.836433 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO http_proxy: Mar 14 00:22:35.934398 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO no_proxy: Mar 14 00:22:36.032879 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:22:36.131224 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:22:36.230402 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO Agent will take identity from EC2 Mar 14 00:22:36.267318 tar[1959]: linux-amd64/README.md Mar 14 00:22:36.297710 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 14 00:22:36.330883 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:22:36.430142 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:22:36.529891 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:22:36.630374 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:22:36.715755 sshd_keygen[1973]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:22:36.728826 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 14 00:22:36.764777 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:22:36.781653 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:22:36.790927 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:22:36.791155 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:22:36.802341 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:22:36.826667 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:22:36.829060 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:22:36.834599 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:22:36.837170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:22:36.838085 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:22:36.872644 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 14 00:22:36.872779 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [Registrar] Starting registrar module Mar 14 00:22:36.872836 amazon-ssm-agent[2130]: 2026-03-14 00:22:35 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:22:36.872880 amazon-ssm-agent[2130]: 2026-03-14 00:22:36 INFO [EC2Identity] EC2 registration was successful. Mar 14 00:22:36.872945 amazon-ssm-agent[2130]: 2026-03-14 00:22:36 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:22:36.873027 amazon-ssm-agent[2130]: 2026-03-14 00:22:36 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:22:36.873084 amazon-ssm-agent[2130]: 2026-03-14 00:22:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:22:36.929327 amazon-ssm-agent[2130]: 2026-03-14 00:22:36 INFO [CredentialRefresher] Next credential rotation will be in 32.01665295081666 minutes Mar 14 00:22:37.645471 ntpd[1945]: Listen normally on 6 eth0 [fe80::4f3:32ff:fee5:a64b%2]:123 Mar 14 00:22:37.645852 ntpd[1945]: 14 Mar 00:22:37 ntpd[1945]: Listen normally on 6 eth0 [fe80::4f3:32ff:fee5:a64b%2]:123 Mar 14 00:22:37.840008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:22:37.847402 systemd[1]: Started sshd@0-172.31.20.55:22-68.220.241.50:55750.service - OpenSSH per-connection server daemon (68.220.241.50:55750). Mar 14 00:22:37.890164 amazon-ssm-agent[2130]: 2026-03-14 00:22:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:22:37.991419 amazon-ssm-agent[2130]: 2026-03-14 00:22:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2187) started Mar 14 00:22:38.067486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:38.068902 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 14 00:22:38.070472 systemd[1]: Startup finished in 596ms (kernel) + 6.850s (initrd) + 7.565s (userspace) = 15.012s. Mar 14 00:22:38.075915 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:22:38.092674 amazon-ssm-agent[2130]: 2026-03-14 00:22:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:22:38.348494 sshd[2184]: Accepted publickey for core from 68.220.241.50 port 55750 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:38.351045 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:38.364482 systemd-logind[1952]: New session 1 of user core. Mar 14 00:22:38.365284 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:22:38.373034 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:22:38.388017 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:22:38.395650 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:22:38.402504 (systemd)[2214]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:22:38.534028 systemd[2214]: Queued start job for default target default.target. Mar 14 00:22:38.543019 systemd[2214]: Created slice app.slice - User Application Slice. Mar 14 00:22:38.543064 systemd[2214]: Reached target paths.target - Paths. Mar 14 00:22:38.543085 systemd[2214]: Reached target timers.target - Timers. Mar 14 00:22:38.544672 systemd[2214]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:22:38.557574 systemd[2214]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:22:38.557728 systemd[2214]: Reached target sockets.target - Sockets. 
Mar 14 00:22:38.557749 systemd[2214]: Reached target basic.target - Basic System. Mar 14 00:22:38.557800 systemd[2214]: Reached target default.target - Main User Target. Mar 14 00:22:38.557839 systemd[2214]: Startup finished in 147ms. Mar 14 00:22:38.558085 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:22:38.561717 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:22:38.929697 systemd[1]: Started sshd@1-172.31.20.55:22-68.220.241.50:55754.service - OpenSSH per-connection server daemon (68.220.241.50:55754). Mar 14 00:22:39.173040 kubelet[2203]: E0314 00:22:39.172985 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:22:39.175579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:22:39.175789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:22:39.176277 systemd[1]: kubelet.service: Consumed 1.065s CPU time. Mar 14 00:22:39.419645 sshd[2225]: Accepted publickey for core from 68.220.241.50 port 55754 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:39.420283 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:39.425660 systemd-logind[1952]: New session 2 of user core. Mar 14 00:22:39.435551 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:22:39.774701 sshd[2225]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:39.778160 systemd[1]: sshd@1-172.31.20.55:22-68.220.241.50:55754.service: Deactivated successfully. Mar 14 00:22:39.780095 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:22:39.781677 systemd-logind[1952]: Session 2 logged out. 
Waiting for processes to exit. Mar 14 00:22:39.782769 systemd-logind[1952]: Removed session 2. Mar 14 00:22:39.860574 systemd[1]: Started sshd@2-172.31.20.55:22-68.220.241.50:55762.service - OpenSSH per-connection server daemon (68.220.241.50:55762). Mar 14 00:22:40.341714 sshd[2234]: Accepted publickey for core from 68.220.241.50 port 55762 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:40.343173 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:40.347490 systemd-logind[1952]: New session 3 of user core. Mar 14 00:22:40.354531 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:22:40.685809 sshd[2234]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:40.689985 systemd-logind[1952]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:22:40.690845 systemd[1]: sshd@2-172.31.20.55:22-68.220.241.50:55762.service: Deactivated successfully. Mar 14 00:22:40.693042 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:22:40.694113 systemd-logind[1952]: Removed session 3. Mar 14 00:22:40.771547 systemd[1]: Started sshd@3-172.31.20.55:22-68.220.241.50:55772.service - OpenSSH per-connection server daemon (68.220.241.50:55772). Mar 14 00:22:41.257005 sshd[2241]: Accepted publickey for core from 68.220.241.50 port 55772 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:41.258585 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:41.263597 systemd-logind[1952]: New session 4 of user core. Mar 14 00:22:41.273551 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:22:41.605571 sshd[2241]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:41.609267 systemd[1]: sshd@3-172.31.20.55:22-68.220.241.50:55772.service: Deactivated successfully. Mar 14 00:22:41.611123 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 14 00:22:41.612893 systemd-logind[1952]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:22:41.614406 systemd-logind[1952]: Removed session 4. Mar 14 00:22:42.085151 systemd-resolved[1904]: Clock change detected. Flushing caches. Mar 14 00:22:42.136337 systemd[1]: Started sshd@4-172.31.20.55:22-68.220.241.50:55784.service - OpenSSH per-connection server daemon (68.220.241.50:55784). Mar 14 00:22:42.621506 sshd[2248]: Accepted publickey for core from 68.220.241.50 port 55784 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:42.623025 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:42.628696 systemd-logind[1952]: New session 5 of user core. Mar 14 00:22:42.634025 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:22:42.945247 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:22:42.945655 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:42.958003 sudo[2251]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:43.036665 sshd[2248]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:43.041132 systemd-logind[1952]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:22:43.042211 systemd[1]: sshd@4-172.31.20.55:22-68.220.241.50:55784.service: Deactivated successfully. Mar 14 00:22:43.044296 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:22:43.045279 systemd-logind[1952]: Removed session 5. Mar 14 00:22:43.126145 systemd[1]: Started sshd@5-172.31.20.55:22-68.220.241.50:38856.service - OpenSSH per-connection server daemon (68.220.241.50:38856). 
Mar 14 00:22:43.603125 sshd[2256]: Accepted publickey for core from 68.220.241.50 port 38856 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:43.603982 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:43.608915 systemd-logind[1952]: New session 6 of user core. Mar 14 00:22:43.620136 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:22:43.875123 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:22:43.875510 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:43.879600 sudo[2260]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:43.885159 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:22:43.885536 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:43.906199 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:43.908134 auditctl[2263]: No rules Mar 14 00:22:43.908529 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:22:43.908745 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:43.915258 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:43.940703 augenrules[2281]: No rules Mar 14 00:22:43.942101 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:43.944124 sudo[2259]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:44.020582 sshd[2256]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:44.025829 systemd-logind[1952]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:22:44.025908 systemd[1]: sshd@5-172.31.20.55:22-68.220.241.50:38856.service: Deactivated successfully. 
Mar 14 00:22:44.028204 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:22:44.029161 systemd-logind[1952]: Removed session 6. Mar 14 00:22:44.108006 systemd[1]: Started sshd@6-172.31.20.55:22-68.220.241.50:38870.service - OpenSSH per-connection server daemon (68.220.241.50:38870). Mar 14 00:22:44.604706 sshd[2289]: Accepted publickey for core from 68.220.241.50 port 38870 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:22:44.606206 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:44.611387 systemd-logind[1952]: New session 7 of user core. Mar 14 00:22:44.628171 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:22:44.881322 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:22:44.881717 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:45.387168 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:22:45.388669 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:22:45.952591 dockerd[2308]: time="2026-03-14T00:22:45.952526438Z" level=info msg="Starting up" Mar 14 00:22:46.131671 dockerd[2308]: time="2026-03-14T00:22:46.131586507Z" level=info msg="Loading containers: start." Mar 14 00:22:46.262832 kernel: Initializing XFRM netlink socket Mar 14 00:22:46.302475 (udev-worker)[2334]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:22:46.354953 systemd-networkd[1794]: docker0: Link UP Mar 14 00:22:46.378388 dockerd[2308]: time="2026-03-14T00:22:46.378347699Z" level=info msg="Loading containers: done." Mar 14 00:22:46.408322 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4086666495-merged.mount: Deactivated successfully. 
Mar 14 00:22:46.415747 dockerd[2308]: time="2026-03-14T00:22:46.415700662Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:22:46.416089 dockerd[2308]: time="2026-03-14T00:22:46.415942078Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:22:46.416164 dockerd[2308]: time="2026-03-14T00:22:46.416098871Z" level=info msg="Daemon has completed initialization" Mar 14 00:22:46.464628 dockerd[2308]: time="2026-03-14T00:22:46.464566968Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:22:46.465085 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:22:47.253734 containerd[1977]: time="2026-03-14T00:22:47.253693880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 14 00:22:47.808644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314646929.mount: Deactivated successfully. Mar 14 00:22:49.639949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:22:49.649584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:22:49.658860 containerd[1977]: time="2026-03-14T00:22:49.658496332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:49.660744 containerd[1977]: time="2026-03-14T00:22:49.660691705Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 14 00:22:49.663842 containerd[1977]: time="2026-03-14T00:22:49.662370941Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:49.671869 containerd[1977]: time="2026-03-14T00:22:49.671799652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:49.674027 containerd[1977]: time="2026-03-14T00:22:49.673982866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.42024333s" Mar 14 00:22:49.674198 containerd[1977]: time="2026-03-14T00:22:49.674176340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 14 00:22:49.675017 containerd[1977]: time="2026-03-14T00:22:49.674977246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 14 00:22:49.848607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:22:49.859274 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:22:49.907245 kubelet[2514]: E0314 00:22:49.907099 2514 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:22:49.911309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:22:49.911517 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:22:51.599310 containerd[1977]: time="2026-03-14T00:22:51.599255358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:51.611438 containerd[1977]: time="2026-03-14T00:22:51.611368658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 14 00:22:51.619519 containerd[1977]: time="2026-03-14T00:22:51.619428199Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:51.630712 containerd[1977]: time="2026-03-14T00:22:51.630372846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:51.632091 containerd[1977]: time="2026-03-14T00:22:51.631904889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.956783368s"
Mar 14 00:22:51.632091 containerd[1977]: time="2026-03-14T00:22:51.631953982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 14 00:22:51.633048 containerd[1977]: time="2026-03-14T00:22:51.632599429Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 14 00:22:53.116634 containerd[1977]: time="2026-03-14T00:22:53.116579680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:53.118033 containerd[1977]: time="2026-03-14T00:22:53.117979733Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 14 00:22:53.119022 containerd[1977]: time="2026-03-14T00:22:53.118964325Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:53.123828 containerd[1977]: time="2026-03-14T00:22:53.122772227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:53.128239 containerd[1977]: time="2026-03-14T00:22:53.128202972Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.495572762s"
Mar 14 00:22:53.128374 containerd[1977]: time="2026-03-14T00:22:53.128356365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 14 00:22:53.128927 containerd[1977]: time="2026-03-14T00:22:53.128896127Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 14 00:22:54.180628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406671432.mount: Deactivated successfully.
Mar 14 00:22:54.572585 containerd[1977]: time="2026-03-14T00:22:54.572458806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:54.573767 containerd[1977]: time="2026-03-14T00:22:54.573558988Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 14 00:22:54.574877 containerd[1977]: time="2026-03-14T00:22:54.574844173Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:54.577320 containerd[1977]: time="2026-03-14T00:22:54.577263962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:54.578219 containerd[1977]: time="2026-03-14T00:22:54.578048856Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.449119585s"
Mar 14 00:22:54.578219 containerd[1977]: time="2026-03-14T00:22:54.578094640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 14 00:22:54.578990 containerd[1977]: time="2026-03-14T00:22:54.578965808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 14 00:22:55.104601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053199579.mount: Deactivated successfully.
Mar 14 00:22:56.590222 containerd[1977]: time="2026-03-14T00:22:56.590165982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:56.592377 containerd[1977]: time="2026-03-14T00:22:56.591865879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 14 00:22:56.593423 containerd[1977]: time="2026-03-14T00:22:56.593286249Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:56.599029 containerd[1977]: time="2026-03-14T00:22:56.598970267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:56.600883 containerd[1977]: time="2026-03-14T00:22:56.600842949Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.0217554s"
Mar 14 00:22:56.600963 containerd[1977]: time="2026-03-14T00:22:56.600883976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 14 00:22:56.601788 containerd[1977]: time="2026-03-14T00:22:56.601605150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:22:57.086169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260968073.mount: Deactivated successfully.
Mar 14 00:22:57.091348 containerd[1977]: time="2026-03-14T00:22:57.091307533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:57.092314 containerd[1977]: time="2026-03-14T00:22:57.092265672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 14 00:22:57.093098 containerd[1977]: time="2026-03-14T00:22:57.093040551Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:57.095179 containerd[1977]: time="2026-03-14T00:22:57.095132522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:57.096645 containerd[1977]: time="2026-03-14T00:22:57.095982570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 494.341281ms"
Mar 14 00:22:57.096645 containerd[1977]: time="2026-03-14T00:22:57.096021053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:22:57.096645 containerd[1977]: time="2026-03-14T00:22:57.096541507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 14 00:22:57.608582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743266150.mount: Deactivated successfully.
Mar 14 00:22:58.701492 containerd[1977]: time="2026-03-14T00:22:58.701435866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:58.703412 containerd[1977]: time="2026-03-14T00:22:58.703356276Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 14 00:22:58.706103 containerd[1977]: time="2026-03-14T00:22:58.705660511Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:58.710003 containerd[1977]: time="2026-03-14T00:22:58.709963422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:58.711372 containerd[1977]: time="2026-03-14T00:22:58.711217079Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.614645914s"
Mar 14 00:22:58.711372 containerd[1977]: time="2026-03-14T00:22:58.711259963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 14 00:23:00.142684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:23:00.152118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:00.498980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:00.509484 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:00.566838 kubelet[2683]: E0314 00:23:00.564841 2683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:00.568219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:00.568424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:02.676088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:02.685645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:02.730856 systemd[1]: Reloading requested from client PID 2697 ('systemctl') (unit session-7.scope)...
Mar 14 00:23:02.730878 systemd[1]: Reloading...
Mar 14 00:23:02.890172 zram_generator::config[2738]: No configuration found.
Mar 14 00:23:03.028045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:03.115476 systemd[1]: Reloading finished in 383 ms.
Mar 14 00:23:03.173508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:03.180104 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:03.182126 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:23:03.182376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:03.188358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:03.410402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:03.424329 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:23:03.486799 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:23:03.486799 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:03.487309 kubelet[2803]: I0314 00:23:03.486867 2803 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:23:03.992763 kubelet[2803]: I0314 00:23:03.992719 2803 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:23:03.992763 kubelet[2803]: I0314 00:23:03.992747 2803 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:23:03.993764 kubelet[2803]: I0314 00:23:03.993736 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:23:03.993764 kubelet[2803]: I0314 00:23:03.993764 2803 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:23:03.994097 kubelet[2803]: I0314 00:23:03.994068 2803 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:23:04.004346 kubelet[2803]: I0314 00:23:04.004301 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:23:04.009522 kubelet[2803]: E0314 00:23:04.009143 2803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:23:04.009522 kubelet[2803]: I0314 00:23:04.009206 2803 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:23:04.011249 kubelet[2803]: E0314 00:23:04.011191 2803 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:23:04.012033 kubelet[2803]: I0314 00:23:04.012005 2803 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:23:04.013061 kubelet[2803]: I0314 00:23:04.013021 2803 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:23:04.013265 kubelet[2803]: I0314 00:23:04.013068 2803 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:23:04.013416 kubelet[2803]: I0314 00:23:04.013266 2803 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:23:04.013416 kubelet[2803]: I0314 00:23:04.013295 2803 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:23:04.013633 kubelet[2803]: I0314 00:23:04.013614 2803 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:23:04.015239 kubelet[2803]: I0314 00:23:04.015220 2803 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:04.015413 kubelet[2803]: I0314 00:23:04.015401 2803 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:23:04.015506 kubelet[2803]: I0314 00:23:04.015420 2803 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:23:04.015506 kubelet[2803]: I0314 00:23:04.015450 2803 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:23:04.015506 kubelet[2803]: I0314 00:23:04.015468 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:23:04.021833 kubelet[2803]: E0314 00:23:04.020726 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-55&limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:23:04.022404 kubelet[2803]: E0314 00:23:04.022373 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:23:04.027412 kubelet[2803]: I0314 00:23:04.027378 2803 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:23:04.028330 kubelet[2803]: I0314 00:23:04.028301 2803 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:23:04.028419 kubelet[2803]: I0314 00:23:04.028350 2803 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:23:04.028461 kubelet[2803]: W0314 00:23:04.028450 2803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:23:04.032091 kubelet[2803]: I0314 00:23:04.032068 2803 server.go:1262] "Started kubelet"
Mar 14 00:23:04.033847 kubelet[2803]: I0314 00:23:04.033611 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:23:04.044282 kubelet[2803]: I0314 00:23:04.044230 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:23:04.045631 kubelet[2803]: I0314 00:23:04.045609 2803 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:23:04.046170 kubelet[2803]: I0314 00:23:04.045779 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:23:04.046256 kubelet[2803]: I0314 00:23:04.046205 2803 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:23:04.046535 kubelet[2803]: I0314 00:23:04.046516 2803 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:23:04.046762 kubelet[2803]: I0314 00:23:04.046747 2803 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:23:04.047132 kubelet[2803]: E0314 00:23:04.047111 2803 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-55\" not found"
Mar 14 00:23:04.050325 kubelet[2803]: I0314 00:23:04.050284 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:23:04.050792 kubelet[2803]: E0314 00:23:04.042409 2803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.55:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-55.189c8d5d0aeeca82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-55,UID:ip-172-31-20-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-55,},FirstTimestamp:2026-03-14 00:23:04.032029314 +0000 UTC m=+0.599539544,LastTimestamp:2026-03-14 00:23:04.032029314 +0000 UTC m=+0.599539544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-55,}"
Mar 14 00:23:04.051013 kubelet[2803]: I0314 00:23:04.051000 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:23:04.051132 kubelet[2803]: I0314 00:23:04.051122 2803 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:23:04.053572 kubelet[2803]: E0314 00:23:04.053530 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-55?timeout=10s\": dial tcp 172.31.20.55:6443: connect: connection refused" interval="200ms"
Mar 14 00:23:04.056977 kubelet[2803]: E0314 00:23:04.056941 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:23:04.057171 kubelet[2803]: E0314 00:23:04.057147 2803 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:23:04.057315 kubelet[2803]: I0314 00:23:04.057294 2803 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:23:04.057315 kubelet[2803]: I0314 00:23:04.057310 2803 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:23:04.057409 kubelet[2803]: I0314 00:23:04.057389 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:23:04.067174 kubelet[2803]: I0314 00:23:04.066913 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:23:04.069677 kubelet[2803]: I0314 00:23:04.069640 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:23:04.069677 kubelet[2803]: I0314 00:23:04.069664 2803 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 14 00:23:04.069942 kubelet[2803]: I0314 00:23:04.069695 2803 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 14 00:23:04.069942 kubelet[2803]: E0314 00:23:04.069744 2803 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:23:04.077772 kubelet[2803]: E0314 00:23:04.077261 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:23:04.093476 kubelet[2803]: I0314 00:23:04.093445 2803 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:23:04.094214 kubelet[2803]: I0314 00:23:04.093953 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:23:04.094214 kubelet[2803]: I0314 00:23:04.093983 2803 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:04.096581 kubelet[2803]: I0314 00:23:04.096356 2803 policy_none.go:49] "None policy: Start"
Mar 14 00:23:04.096581 kubelet[2803]: I0314 00:23:04.096375 2803 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:23:04.096581 kubelet[2803]: I0314 00:23:04.096386 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:23:04.098021 kubelet[2803]: I0314 00:23:04.098007 2803 policy_none.go:47] "Start"
Mar 14 00:23:04.102620 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:23:04.113730 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:23:04.117778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:23:04.129285 kubelet[2803]: E0314 00:23:04.128992 2803 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:23:04.129285 kubelet[2803]: I0314 00:23:04.129267 2803 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:23:04.129842 kubelet[2803]: I0314 00:23:04.129280 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:23:04.129842 kubelet[2803]: I0314 00:23:04.129544 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:23:04.132377 kubelet[2803]: E0314 00:23:04.132342 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:23:04.132479 kubelet[2803]: E0314 00:23:04.132392 2803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-55\" not found"
Mar 14 00:23:04.184976 systemd[1]: Created slice kubepods-burstable-podf90de8bd54251cf2068245b6830c2114.slice - libcontainer container kubepods-burstable-podf90de8bd54251cf2068245b6830c2114.slice.
Mar 14 00:23:04.194920 kubelet[2803]: E0314 00:23:04.194876 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:04.198825 systemd[1]: Created slice kubepods-burstable-podc9a7dfe92317d3b2f5be62bebb7d42de.slice - libcontainer container kubepods-burstable-podc9a7dfe92317d3b2f5be62bebb7d42de.slice.
Mar 14 00:23:04.201414 kubelet[2803]: E0314 00:23:04.201219 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:04.213322 systemd[1]: Created slice kubepods-burstable-pod81d4c9f8ae34b52ed0ee72c6dcba8318.slice - libcontainer container kubepods-burstable-pod81d4c9f8ae34b52ed0ee72c6dcba8318.slice.
Mar 14 00:23:04.215448 kubelet[2803]: E0314 00:23:04.215420 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:04.231185 kubelet[2803]: I0314 00:23:04.230922 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55"
Mar 14 00:23:04.231319 kubelet[2803]: E0314 00:23:04.231250 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.55:6443/api/v1/nodes\": dial tcp 172.31.20.55:6443: connect: connection refused" node="ip-172-31-20-55"
Mar 14 00:23:04.254549 kubelet[2803]: E0314 00:23:04.254422 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-55?timeout=10s\": dial tcp 172.31.20.55:6443: connect: connection refused" interval="400ms"
Mar 14 00:23:04.352055 kubelet[2803]: I0314 00:23:04.352002 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:04.352055 kubelet[2803]: I0314 00:23:04.352061 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:04.352055 kubelet[2803]: I0314 00:23:04.352098 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d4c9f8ae34b52ed0ee72c6dcba8318-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-55\" (UID: \"81d4c9f8ae34b52ed0ee72c6dcba8318\") " pod="kube-system/kube-scheduler-ip-172-31-20-55"
Mar 14 00:23:04.352390 kubelet[2803]: I0314 00:23:04.352139 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-ca-certs\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:04.352390 kubelet[2803]: I0314 00:23:04.352165 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:04.352390 kubelet[2803]: I0314 00:23:04.352211 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:04.352390 kubelet[2803]: I0314 00:23:04.352233 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:04.352390 kubelet[2803]: I0314 00:23:04.352256 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:04.352562 kubelet[2803]: I0314 00:23:04.352275 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:04.433702 kubelet[2803]: I0314 00:23:04.433671 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55"
Mar 14 00:23:04.434043 kubelet[2803]: E0314 00:23:04.434012 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.55:6443/api/v1/nodes\": dial tcp 172.31.20.55:6443: connect: connection refused" node="ip-172-31-20-55"
Mar 14 00:23:04.499488 containerd[1977]: time="2026-03-14T00:23:04.499444918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-55,Uid:f90de8bd54251cf2068245b6830c2114,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:04.504487 containerd[1977]: time="2026-03-14T00:23:04.504436033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-55,Uid:c9a7dfe92317d3b2f5be62bebb7d42de,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:04.519621 containerd[1977]: time="2026-03-14T00:23:04.519288437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-55,Uid:81d4c9f8ae34b52ed0ee72c6dcba8318,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:04.655372 kubelet[2803]: E0314 00:23:04.655326 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-55?timeout=10s\": dial tcp 172.31.20.55:6443: connect: connection refused" interval="800ms"
Mar 14 00:23:04.835853 kubelet[2803]: I0314 00:23:04.835608 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55"
Mar 14 00:23:04.836118 kubelet[2803]: E0314 00:23:04.836085 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.55:6443/api/v1/nodes\": dial tcp 172.31.20.55:6443: connect: connection refused" node="ip-172-31-20-55"
Mar 14 00:23:04.948287 kubelet[2803]: E0314 00:23:04.948248 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:23:04.973571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91498012.mount: Deactivated successfully.
Mar 14 00:23:04.979992 containerd[1977]: time="2026-03-14T00:23:04.979944870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:23:04.983175 containerd[1977]: time="2026-03-14T00:23:04.983099911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:23:04.984043 containerd[1977]: time="2026-03-14T00:23:04.984004235Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:23:04.984915 containerd[1977]: time="2026-03-14T00:23:04.984882921Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:23:04.986319 containerd[1977]: time="2026-03-14T00:23:04.986284202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:23:04.987201 containerd[1977]: time="2026-03-14T00:23:04.987155203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:23:04.988102 containerd[1977]: time="2026-03-14T00:23:04.988030155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:23:04.990271 containerd[1977]: time="2026-03-14T00:23:04.990173881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:23:04.993827 containerd[1977]: time="2026-03-14T00:23:04.991651798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.105967ms"
Mar 14 00:23:04.993827 containerd[1977]: time="2026-03-14T00:23:04.992768259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.239621ms"
Mar 14 00:23:04.994957 containerd[1977]: time="2026-03-14T00:23:04.994923108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.551352ms"
Mar 14 00:23:05.061532 kubelet[2803]: E0314 00:23:05.061493 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-55&limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:23:05.245836 kubelet[2803]: E0314 00:23:05.244343 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:23:05.276862 containerd[1977]: time="2026-03-14T00:23:05.274743304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:05.276862 containerd[1977]: time="2026-03-14T00:23:05.274826355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:05.276862 containerd[1977]: time="2026-03-14T00:23:05.274868758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.276862 containerd[1977]: time="2026-03-14T00:23:05.274977891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.279632 containerd[1977]: time="2026-03-14T00:23:05.279475555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:05.279785 containerd[1977]: time="2026-03-14T00:23:05.279685018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:05.279876 containerd[1977]: time="2026-03-14T00:23:05.279835096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.283147 containerd[1977]: time="2026-03-14T00:23:05.281935349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:05.283147 containerd[1977]: time="2026-03-14T00:23:05.282018307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:05.283147 containerd[1977]: time="2026-03-14T00:23:05.282042774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.283147 containerd[1977]: time="2026-03-14T00:23:05.282156374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.285865 containerd[1977]: time="2026-03-14T00:23:05.283854195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:05.311022 systemd[1]: Started cri-containerd-cb547d8d84d02e7d0e25f4d3b81f12ad4e9b7d45503bbdf997e5c13228070925.scope - libcontainer container cb547d8d84d02e7d0e25f4d3b81f12ad4e9b7d45503bbdf997e5c13228070925.
Mar 14 00:23:05.335067 systemd[1]: Started cri-containerd-22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1.scope - libcontainer container 22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1.
Mar 14 00:23:05.340537 systemd[1]: Started cri-containerd-def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039.scope - libcontainer container def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039.
Mar 14 00:23:05.418500 kubelet[2803]: E0314 00:23:05.418439 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:23:05.425327 containerd[1977]: time="2026-03-14T00:23:05.425276917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-55,Uid:c9a7dfe92317d3b2f5be62bebb7d42de,Namespace:kube-system,Attempt:0,} returns sandbox id \"22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1\""
Mar 14 00:23:05.444556 containerd[1977]: time="2026-03-14T00:23:05.444415571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-55,Uid:f90de8bd54251cf2068245b6830c2114,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb547d8d84d02e7d0e25f4d3b81f12ad4e9b7d45503bbdf997e5c13228070925\""
Mar 14 00:23:05.451297 containerd[1977]: time="2026-03-14T00:23:05.451121529Z" level=info msg="CreateContainer within sandbox \"22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:23:05.452340 containerd[1977]: time="2026-03-14T00:23:05.452196191Z" level=info msg="CreateContainer within sandbox \"cb547d8d84d02e7d0e25f4d3b81f12ad4e9b7d45503bbdf997e5c13228070925\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:23:05.456858 kubelet[2803]: E0314 00:23:05.456802 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-55?timeout=10s\": dial tcp 172.31.20.55:6443: connect: connection refused" interval="1.6s"
Mar 14 00:23:05.466018 containerd[1977]: time="2026-03-14T00:23:05.465966966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-55,Uid:81d4c9f8ae34b52ed0ee72c6dcba8318,Namespace:kube-system,Attempt:0,} returns sandbox id \"def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039\""
Mar 14 00:23:05.471947 containerd[1977]: time="2026-03-14T00:23:05.471891528Z" level=info msg="CreateContainer within sandbox \"def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:23:05.508960 containerd[1977]: time="2026-03-14T00:23:05.508848964Z" level=info msg="CreateContainer within sandbox \"def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b\""
Mar 14 00:23:05.511013 containerd[1977]: time="2026-03-14T00:23:05.510977460Z" level=info msg="CreateContainer within sandbox \"22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61\""
Mar 14 00:23:05.511238 containerd[1977]: time="2026-03-14T00:23:05.511210770Z" level=info msg="StartContainer for \"f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b\""
Mar 14 00:23:05.511894 containerd[1977]: time="2026-03-14T00:23:05.511871148Z" level=info msg="CreateContainer within sandbox \"cb547d8d84d02e7d0e25f4d3b81f12ad4e9b7d45503bbdf997e5c13228070925\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6cf73aa589359fd0eee72c91b4abec2e4f77acb84e53069683781e20323f125\""
Mar 14 00:23:05.512551 containerd[1977]: time="2026-03-14T00:23:05.512365785Z" level=info msg="StartContainer for \"82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61\""
Mar 14 00:23:05.523836 containerd[1977]: time="2026-03-14T00:23:05.523344955Z" level=info msg="StartContainer for \"c6cf73aa589359fd0eee72c91b4abec2e4f77acb84e53069683781e20323f125\""
Mar 14 00:23:05.566351 systemd[1]: Started cri-containerd-82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61.scope - libcontainer container 82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61.
Mar 14 00:23:05.576552 systemd[1]: Started cri-containerd-f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b.scope - libcontainer container f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b.
Mar 14 00:23:05.586480 systemd[1]: Started cri-containerd-c6cf73aa589359fd0eee72c91b4abec2e4f77acb84e53069683781e20323f125.scope - libcontainer container c6cf73aa589359fd0eee72c91b4abec2e4f77acb84e53069683781e20323f125.
Mar 14 00:23:05.641463 kubelet[2803]: I0314 00:23:05.641056 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55"
Mar 14 00:23:05.641463 kubelet[2803]: E0314 00:23:05.641412 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.55:6443/api/v1/nodes\": dial tcp 172.31.20.55:6443: connect: connection refused" node="ip-172-31-20-55"
Mar 14 00:23:05.668174 containerd[1977]: time="2026-03-14T00:23:05.668047101Z" level=info msg="StartContainer for \"c6cf73aa589359fd0eee72c91b4abec2e4f77acb84e53069683781e20323f125\" returns successfully"
Mar 14 00:23:05.674564 containerd[1977]: time="2026-03-14T00:23:05.674275442Z" level=info msg="StartContainer for \"82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61\" returns successfully"
Mar 14 00:23:05.701937 containerd[1977]: time="2026-03-14T00:23:05.701891939Z" level=info msg="StartContainer for \"f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b\" returns successfully"
Mar 14 00:23:05.947869 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:23:06.101407 kubelet[2803]: E0314 00:23:06.101094 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:06.106546 kubelet[2803]: E0314 00:23:06.106523 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:06.107638 kubelet[2803]: E0314 00:23:06.107494 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:07.111088 kubelet[2803]: E0314 00:23:07.110901 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:07.112413 kubelet[2803]: E0314 00:23:07.112270 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:07.244861 kubelet[2803]: I0314 00:23:07.244589 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55"
Mar 14 00:23:08.111940 kubelet[2803]: E0314 00:23:08.110833 2803 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-55\" not found" node="ip-172-31-20-55"
Mar 14 00:23:09.298505 kubelet[2803]: I0314 00:23:09.298383 2803 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-55"
Mar 14 00:23:09.307020 kubelet[2803]: E0314 00:23:09.306788 2803 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-55.189c8d5d0aeeca82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-55,UID:ip-172-31-20-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-55,},FirstTimestamp:2026-03-14 00:23:04.032029314 +0000 UTC m=+0.599539544,LastTimestamp:2026-03-14 00:23:04.032029314 +0000 UTC m=+0.599539544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-55,}"
Mar 14 00:23:09.347963 kubelet[2803]: I0314 00:23:09.347587 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:09.381487 kubelet[2803]: E0314 00:23:09.381455 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:09.381949 kubelet[2803]: I0314 00:23:09.381700 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:09.384977 kubelet[2803]: E0314 00:23:09.384772 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:09.384977 kubelet[2803]: I0314 00:23:09.384799 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-55"
Mar 14 00:23:09.389234 kubelet[2803]: E0314 00:23:09.389203 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-55"
Mar 14 00:23:10.026441 kubelet[2803]: I0314 00:23:10.026147 2803 apiserver.go:52] "Watching apiserver"
Mar 14 00:23:10.051279 kubelet[2803]: I0314 00:23:10.051237 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:23:10.056859 kubelet[2803]: I0314 00:23:10.056141 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:11.417233 systemd[1]: Reloading requested from client PID 3085 ('systemctl') (unit session-7.scope)...
Mar 14 00:23:11.417251 systemd[1]: Reloading...
Mar 14 00:23:11.553844 zram_generator::config[3128]: No configuration found.
Mar 14 00:23:11.679258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:11.785594 systemd[1]: Reloading finished in 367 ms.
Mar 14 00:23:11.834413 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:11.846465 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:23:11.846755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:11.846845 systemd[1]: kubelet.service: Consumed 1.059s CPU time, 122.2M memory peak, 0B memory swap peak.
Mar 14 00:23:11.855220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:12.333436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:12.346277 (kubelet)[3185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:23:12.424018 kubelet[3185]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:23:12.424018 kubelet[3185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:12.425828 kubelet[3185]: I0314 00:23:12.424506 3185 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:23:12.436632 kubelet[3185]: I0314 00:23:12.436600 3185 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:23:12.436771 kubelet[3185]: I0314 00:23:12.436758 3185 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:23:12.437685 kubelet[3185]: I0314 00:23:12.437657 3185 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:23:12.437685 kubelet[3185]: I0314 00:23:12.437681 3185 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:23:12.439052 kubelet[3185]: I0314 00:23:12.438078 3185 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:23:12.440137 kubelet[3185]: I0314 00:23:12.440107 3185 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:23:12.443112 kubelet[3185]: I0314 00:23:12.443084 3185 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:23:12.448726 kubelet[3185]: E0314 00:23:12.448698 3185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:23:12.448905 kubelet[3185]: I0314 00:23:12.448874 3185 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:23:12.451959 kubelet[3185]: I0314 00:23:12.451588 3185 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:23:12.453477 kubelet[3185]: I0314 00:23:12.453440 3185 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:23:12.453665 kubelet[3185]: I0314 00:23:12.453477 3185 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:23:12.453798 kubelet[3185]: I0314 00:23:12.453666 3185 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:23:12.453798 kubelet[3185]: I0314 00:23:12.453682 3185 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:23:12.453798 kubelet[3185]: I0314 00:23:12.453721 3185 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:23:12.453977 kubelet[3185]: I0314 00:23:12.453956 3185 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:12.456122 kubelet[3185]: I0314 00:23:12.454216 3185 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:23:12.456122 kubelet[3185]: I0314 00:23:12.454244 3185 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:23:12.456122 kubelet[3185]: I0314 00:23:12.454272 3185 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:23:12.456122 kubelet[3185]: I0314 00:23:12.454285 3185 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:23:12.456122 kubelet[3185]: I0314 00:23:12.455556 3185 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:23:12.456521 kubelet[3185]: I0314 00:23:12.456503 3185 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:23:12.456576 kubelet[3185]: I0314 00:23:12.456548 3185 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:23:12.469001 kubelet[3185]: I0314 00:23:12.468977 3185 server.go:1262] "Started kubelet"
Mar 14 00:23:12.472150 kubelet[3185]: I0314 00:23:12.472096 3185 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:23:12.481296 kubelet[3185]: I0314 00:23:12.480795 3185 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:23:12.481430 kubelet[3185]: I0314 00:23:12.481310 3185 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:23:12.481549 kubelet[3185]: I0314 00:23:12.481498 3185 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:23:12.487583 kubelet[3185]: I0314 00:23:12.486482 3185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:23:12.507725 kubelet[3185]: I0314 00:23:12.502331 3185 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:23:12.507725 kubelet[3185]: I0314 00:23:12.506136 3185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:23:12.510294 kubelet[3185]: I0314 00:23:12.509136 3185 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:23:12.510946 kubelet[3185]: I0314 00:23:12.510892 3185 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:23:12.511246 kubelet[3185]: E0314 00:23:12.511217 3185 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-55\" not found"
Mar 14 00:23:12.513670 kubelet[3185]: I0314 00:23:12.513082 3185 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:23:12.521244 kubelet[3185]: I0314 00:23:12.520760 3185 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:23:12.526590 kubelet[3185]: E0314 00:23:12.526500 3185 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:23:12.527361 kubelet[3185]: I0314 00:23:12.526548 3185 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:23:12.527636 kubelet[3185]: I0314 00:23:12.527594 3185 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:23:12.535513 kubelet[3185]: I0314 00:23:12.534567 3185 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:23:12.554264 kubelet[3185]: I0314 00:23:12.553296 3185 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:23:12.554417 kubelet[3185]: I0314 00:23:12.554404 3185 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 14 00:23:12.554556 kubelet[3185]: I0314 00:23:12.554526 3185 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 14 00:23:12.554705 kubelet[3185]: E0314 00:23:12.554672 3185 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:23:12.604853 kubelet[3185]: I0314 00:23:12.604545 3185 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.604564 3185 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605617 3185 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605773 3185 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605786 3185 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605827 3185 policy_none.go:49] "None policy: Start"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605840 3185 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605852 3185 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605974 3185 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:23:12.606163 kubelet[3185]: I0314 00:23:12.605983 3185 policy_none.go:47] "Start"
Mar 14 00:23:12.614442 kubelet[3185]: E0314 00:23:12.613549 3185 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:23:12.614442 kubelet[3185]: I0314 00:23:12.613743 3185 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:23:12.614442 kubelet[3185]: I0314 00:23:12.613756 3185 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:23:12.614442 kubelet[3185]: I0314 00:23:12.614438 3185 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:23:12.618425 kubelet[3185]: E0314 00:23:12.618397 3185 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:23:12.656416 kubelet[3185]: I0314 00:23:12.656029 3185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-55"
Mar 14 00:23:12.656416 kubelet[3185]: I0314 00:23:12.656110 3185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:12.657791 kubelet[3185]: I0314 00:23:12.657170 3185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:12.666939 kubelet[3185]: E0314 00:23:12.666900 3185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-55\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:12.721772 kubelet[3185]: I0314 00:23:12.721454 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-ca-certs\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:12.721772 kubelet[3185]: I0314 00:23:12.721503 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:12.721772 kubelet[3185]: I0314 00:23:12.721533 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:12.721772 kubelet[3185]: I0314 00:23:12.721554 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:12.721772 kubelet[3185]: I0314 00:23:12.721583 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f90de8bd54251cf2068245b6830c2114-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-55\" (UID: \"f90de8bd54251cf2068245b6830c2114\") " pod="kube-system/kube-apiserver-ip-172-31-20-55"
Mar 14 00:23:12.722285 kubelet[3185]: I0314 00:23:12.721605 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:12.722285 kubelet[3185]: I0314 00:23:12.721625 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55"
Mar 14 00:23:12.722285 kubelet[3185]: I0314 00:23:12.721644 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9a7dfe92317d3b2f5be62bebb7d42de-kubeconfig\") pod
\"kube-controller-manager-ip-172-31-20-55\" (UID: \"c9a7dfe92317d3b2f5be62bebb7d42de\") " pod="kube-system/kube-controller-manager-ip-172-31-20-55" Mar 14 00:23:12.722285 kubelet[3185]: I0314 00:23:12.721668 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d4c9f8ae34b52ed0ee72c6dcba8318-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-55\" (UID: \"81d4c9f8ae34b52ed0ee72c6dcba8318\") " pod="kube-system/kube-scheduler-ip-172-31-20-55" Mar 14 00:23:12.739062 kubelet[3185]: I0314 00:23:12.739019 3185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-55" Mar 14 00:23:12.753191 kubelet[3185]: I0314 00:23:12.753147 3185 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-55" Mar 14 00:23:12.753414 kubelet[3185]: I0314 00:23:12.753235 3185 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-55" Mar 14 00:23:13.456065 kubelet[3185]: I0314 00:23:13.455822 3185 apiserver.go:52] "Watching apiserver" Mar 14 00:23:13.514290 kubelet[3185]: I0314 00:23:13.514214 3185 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:23:13.573383 kubelet[3185]: I0314 00:23:13.573312 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-55" podStartSLOduration=1.5732916879999999 podStartE2EDuration="1.573291688s" podCreationTimestamp="2026-03-14 00:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:13.559701391 +0000 UTC m=+1.195898678" watchObservedRunningTime="2026-03-14 00:23:13.573291688 +0000 UTC m=+1.209488973" Mar 14 00:23:13.584825 kubelet[3185]: I0314 00:23:13.584782 3185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-55" Mar 
14 00:23:13.595140 kubelet[3185]: I0314 00:23:13.595055 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-55" podStartSLOduration=1.595036763 podStartE2EDuration="1.595036763s" podCreationTimestamp="2026-03-14 00:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:13.57414394 +0000 UTC m=+1.210341223" watchObservedRunningTime="2026-03-14 00:23:13.595036763 +0000 UTC m=+1.231234049" Mar 14 00:23:13.596861 kubelet[3185]: E0314 00:23:13.596508 3185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-55\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-55" Mar 14 00:23:13.617198 kubelet[3185]: I0314 00:23:13.617119 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-55" podStartSLOduration=3.617099408 podStartE2EDuration="3.617099408s" podCreationTimestamp="2026-03-14 00:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:13.595486718 +0000 UTC m=+1.231684004" watchObservedRunningTime="2026-03-14 00:23:13.617099408 +0000 UTC m=+1.253296678" Mar 14 00:23:17.825263 kubelet[3185]: I0314 00:23:17.825220 3185 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:23:17.826054 kubelet[3185]: I0314 00:23:17.825955 3185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:23:17.826117 containerd[1977]: time="2026-03-14T00:23:17.825721362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 14 00:23:18.933094 systemd[1]: Created slice kubepods-besteffort-pod58854c66_6972_42d5_abae_365c895e19cc.slice - libcontainer container kubepods-besteffort-pod58854c66_6972_42d5_abae_365c895e19cc.slice. Mar 14 00:23:18.960351 kubelet[3185]: I0314 00:23:18.960313 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58854c66-6972-42d5-abae-365c895e19cc-lib-modules\") pod \"kube-proxy-47bcf\" (UID: \"58854c66-6972-42d5-abae-365c895e19cc\") " pod="kube-system/kube-proxy-47bcf" Mar 14 00:23:18.960351 kubelet[3185]: I0314 00:23:18.960354 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxc2\" (UniqueName: \"kubernetes.io/projected/58854c66-6972-42d5-abae-365c895e19cc-kube-api-access-5bxc2\") pod \"kube-proxy-47bcf\" (UID: \"58854c66-6972-42d5-abae-365c895e19cc\") " pod="kube-system/kube-proxy-47bcf" Mar 14 00:23:18.960351 kubelet[3185]: I0314 00:23:18.960396 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58854c66-6972-42d5-abae-365c895e19cc-xtables-lock\") pod \"kube-proxy-47bcf\" (UID: \"58854c66-6972-42d5-abae-365c895e19cc\") " pod="kube-system/kube-proxy-47bcf" Mar 14 00:23:18.960985 kubelet[3185]: I0314 00:23:18.960420 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58854c66-6972-42d5-abae-365c895e19cc-kube-proxy\") pod \"kube-proxy-47bcf\" (UID: \"58854c66-6972-42d5-abae-365c895e19cc\") " pod="kube-system/kube-proxy-47bcf" Mar 14 00:23:19.055275 systemd[1]: Created slice kubepods-besteffort-podf3665d0d_f2f9_449d_a3a8_13f50ed1960e.slice - libcontainer container kubepods-besteffort-podf3665d0d_f2f9_449d_a3a8_13f50ed1960e.slice. 
Mar 14 00:23:19.063122 kubelet[3185]: I0314 00:23:19.062978 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3665d0d-f2f9-449d-a3a8-13f50ed1960e-var-lib-calico\") pod \"tigera-operator-5588576f44-b99s8\" (UID: \"f3665d0d-f2f9-449d-a3a8-13f50ed1960e\") " pod="tigera-operator/tigera-operator-5588576f44-b99s8" Mar 14 00:23:19.063580 kubelet[3185]: I0314 00:23:19.063543 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjrj\" (UniqueName: \"kubernetes.io/projected/f3665d0d-f2f9-449d-a3a8-13f50ed1960e-kube-api-access-xxjrj\") pod \"tigera-operator-5588576f44-b99s8\" (UID: \"f3665d0d-f2f9-449d-a3a8-13f50ed1960e\") " pod="tigera-operator/tigera-operator-5588576f44-b99s8" Mar 14 00:23:19.253651 containerd[1977]: time="2026-03-14T00:23:19.253543270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47bcf,Uid:58854c66-6972-42d5-abae-365c895e19cc,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:19.283525 containerd[1977]: time="2026-03-14T00:23:19.283261671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:19.284662 containerd[1977]: time="2026-03-14T00:23:19.284339057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:19.284662 containerd[1977]: time="2026-03-14T00:23:19.284363036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:19.284662 containerd[1977]: time="2026-03-14T00:23:19.284454159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:19.315113 systemd[1]: Started cri-containerd-5f5e1fad4246c8753cc7e661236313042a34ee3e05dedd7a00f0c438e96ca062.scope - libcontainer container 5f5e1fad4246c8753cc7e661236313042a34ee3e05dedd7a00f0c438e96ca062. Mar 14 00:23:19.340895 containerd[1977]: time="2026-03-14T00:23:19.340853344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47bcf,Uid:58854c66-6972-42d5-abae-365c895e19cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5e1fad4246c8753cc7e661236313042a34ee3e05dedd7a00f0c438e96ca062\"" Mar 14 00:23:19.348095 containerd[1977]: time="2026-03-14T00:23:19.347822932Z" level=info msg="CreateContainer within sandbox \"5f5e1fad4246c8753cc7e661236313042a34ee3e05dedd7a00f0c438e96ca062\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:23:19.364356 containerd[1977]: time="2026-03-14T00:23:19.364046925Z" level=info msg="CreateContainer within sandbox \"5f5e1fad4246c8753cc7e661236313042a34ee3e05dedd7a00f0c438e96ca062\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce3bea4e59a25938cf69ceebb83fdf7b6864ae1c95d97536ed594f4ec1449dd4\"" Mar 14 00:23:19.365536 containerd[1977]: time="2026-03-14T00:23:19.365046768Z" level=info msg="StartContainer for \"ce3bea4e59a25938cf69ceebb83fdf7b6864ae1c95d97536ed594f4ec1449dd4\"" Mar 14 00:23:19.378201 containerd[1977]: time="2026-03-14T00:23:19.378061877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-b99s8,Uid:f3665d0d-f2f9-449d-a3a8-13f50ed1960e,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:23:19.405332 systemd[1]: Started cri-containerd-ce3bea4e59a25938cf69ceebb83fdf7b6864ae1c95d97536ed594f4ec1449dd4.scope - libcontainer container ce3bea4e59a25938cf69ceebb83fdf7b6864ae1c95d97536ed594f4ec1449dd4. Mar 14 00:23:19.423696 containerd[1977]: time="2026-03-14T00:23:19.423240021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:19.423696 containerd[1977]: time="2026-03-14T00:23:19.423317013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:19.423696 containerd[1977]: time="2026-03-14T00:23:19.423343656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:19.424392 containerd[1977]: time="2026-03-14T00:23:19.424074021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:19.452066 systemd[1]: Started cri-containerd-e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d.scope - libcontainer container e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d. Mar 14 00:23:19.460017 containerd[1977]: time="2026-03-14T00:23:19.459871380Z" level=info msg="StartContainer for \"ce3bea4e59a25938cf69ceebb83fdf7b6864ae1c95d97536ed594f4ec1449dd4\" returns successfully" Mar 14 00:23:19.527400 containerd[1977]: time="2026-03-14T00:23:19.527234181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-b99s8,Uid:f3665d0d-f2f9-449d-a3a8-13f50ed1960e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d\"" Mar 14 00:23:19.532101 containerd[1977]: time="2026-03-14T00:23:19.532052825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:23:19.638254 kubelet[3185]: I0314 00:23:19.638188 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-47bcf" podStartSLOduration=1.6361021980000001 podStartE2EDuration="1.636102198s" podCreationTimestamp="2026-03-14 00:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 00:23:19.623464769 +0000 UTC m=+7.259662055" watchObservedRunningTime="2026-03-14 00:23:19.636102198 +0000 UTC m=+7.272299485" Mar 14 00:23:20.499316 update_engine[1953]: I20260314 00:23:20.499243 1953 update_attempter.cc:509] Updating boot flags... Mar 14 00:23:20.569624 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3499) Mar 14 00:23:20.821856 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3389) Mar 14 00:23:20.881915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151078058.mount: Deactivated successfully. Mar 14 00:23:22.868859 containerd[1977]: time="2026-03-14T00:23:22.868787170Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:22.871530 containerd[1977]: time="2026-03-14T00:23:22.871314245Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:23:22.874703 containerd[1977]: time="2026-03-14T00:23:22.874507714Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:22.879778 containerd[1977]: time="2026-03-14T00:23:22.879731390Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:22.880692 containerd[1977]: time="2026-03-14T00:23:22.880646020Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", 
size \"40842151\" in 3.348536992s" Mar 14 00:23:22.880692 containerd[1977]: time="2026-03-14T00:23:22.880695283Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:23:22.887793 containerd[1977]: time="2026-03-14T00:23:22.887750356Z" level=info msg="CreateContainer within sandbox \"e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:23:22.907015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763815964.mount: Deactivated successfully. Mar 14 00:23:22.909653 containerd[1977]: time="2026-03-14T00:23:22.909609280Z" level=info msg="CreateContainer within sandbox \"e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5\"" Mar 14 00:23:22.910377 containerd[1977]: time="2026-03-14T00:23:22.910306194Z" level=info msg="StartContainer for \"d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5\"" Mar 14 00:23:22.950036 systemd[1]: Started cri-containerd-d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5.scope - libcontainer container d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5. Mar 14 00:23:22.980062 containerd[1977]: time="2026-03-14T00:23:22.979918839Z" level=info msg="StartContainer for \"d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5\" returns successfully" Mar 14 00:23:30.060982 sudo[2292]: pam_unix(sudo:session): session closed for user root Mar 14 00:23:30.144473 sshd[2289]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:30.152035 systemd[1]: sshd@6-172.31.20.55:22-68.220.241.50:38870.service: Deactivated successfully. Mar 14 00:23:30.156777 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 14 00:23:30.159954 systemd[1]: session-7.scope: Consumed 6.138s CPU time, 150.4M memory peak, 0B memory swap peak. Mar 14 00:23:30.161000 systemd-logind[1952]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:23:30.162788 systemd-logind[1952]: Removed session 7. Mar 14 00:23:31.224512 kubelet[3185]: I0314 00:23:31.224449 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-b99s8" podStartSLOduration=8.873485456000001 podStartE2EDuration="12.224428879s" podCreationTimestamp="2026-03-14 00:23:19 +0000 UTC" firstStartedPulling="2026-03-14 00:23:19.530786899 +0000 UTC m=+7.166984176" lastFinishedPulling="2026-03-14 00:23:22.881730326 +0000 UTC m=+10.517927599" observedRunningTime="2026-03-14 00:23:23.622574764 +0000 UTC m=+11.258772051" watchObservedRunningTime="2026-03-14 00:23:31.224428879 +0000 UTC m=+18.860626168" Mar 14 00:23:31.238911 systemd[1]: Created slice kubepods-besteffort-podbdf5f6ae_5952_40a0_bb46_44a525a6c67c.slice - libcontainer container kubepods-besteffort-podbdf5f6ae_5952_40a0_bb46_44a525a6c67c.slice. 
Mar 14 00:23:31.254683 kubelet[3185]: I0314 00:23:31.252322 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf5f6ae-5952-40a0-bb46-44a525a6c67c-tigera-ca-bundle\") pod \"calico-typha-74bbd89c8-g4dvr\" (UID: \"bdf5f6ae-5952-40a0-bb46-44a525a6c67c\") " pod="calico-system/calico-typha-74bbd89c8-g4dvr" Mar 14 00:23:31.254683 kubelet[3185]: I0314 00:23:31.252386 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmm2\" (UniqueName: \"kubernetes.io/projected/bdf5f6ae-5952-40a0-bb46-44a525a6c67c-kube-api-access-kxmm2\") pod \"calico-typha-74bbd89c8-g4dvr\" (UID: \"bdf5f6ae-5952-40a0-bb46-44a525a6c67c\") " pod="calico-system/calico-typha-74bbd89c8-g4dvr" Mar 14 00:23:31.254683 kubelet[3185]: I0314 00:23:31.252433 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bdf5f6ae-5952-40a0-bb46-44a525a6c67c-typha-certs\") pod \"calico-typha-74bbd89c8-g4dvr\" (UID: \"bdf5f6ae-5952-40a0-bb46-44a525a6c67c\") " pod="calico-system/calico-typha-74bbd89c8-g4dvr" Mar 14 00:23:31.407979 systemd[1]: Created slice kubepods-besteffort-poddadc90c7_de0e_436b_b91f_1637f8778779.slice - libcontainer container kubepods-besteffort-poddadc90c7_de0e_436b_b91f_1637f8778779.slice. 
Mar 14 00:23:31.453527 kubelet[3185]: I0314 00:23:31.453470 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-cni-log-dir\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453527 kubelet[3185]: I0314 00:23:31.453521 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-lib-modules\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453734 kubelet[3185]: I0314 00:23:31.453543 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dadc90c7-de0e-436b-b91f-1637f8778779-node-certs\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453734 kubelet[3185]: I0314 00:23:31.453565 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-policysync\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453734 kubelet[3185]: I0314 00:23:31.453587 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-var-run-calico\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453734 kubelet[3185]: I0314 00:23:31.453609 3185 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-flexvol-driver-host\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453734 kubelet[3185]: I0314 00:23:31.453631 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-var-lib-calico\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453990 kubelet[3185]: I0314 00:23:31.453652 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-sys-fs\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453990 kubelet[3185]: I0314 00:23:31.453677 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-cni-bin-dir\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453990 kubelet[3185]: I0314 00:23:31.453699 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dadc90c7-de0e-436b-b91f-1637f8778779-tigera-ca-bundle\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453990 kubelet[3185]: I0314 00:23:31.453726 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-cni-net-dir\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.453990 kubelet[3185]: I0314 00:23:31.453746 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-nodeproc\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.454204 kubelet[3185]: I0314 00:23:31.453769 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-xtables-lock\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.454204 kubelet[3185]: I0314 00:23:31.453795 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/dadc90c7-de0e-436b-b91f-1637f8778779-bpffs\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.454204 kubelet[3185]: I0314 00:23:31.453925 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sjv\" (UniqueName: \"kubernetes.io/projected/dadc90c7-de0e-436b-b91f-1637f8778779-kube-api-access-67sjv\") pod \"calico-node-tmgwz\" (UID: \"dadc90c7-de0e-436b-b91f-1637f8778779\") " pod="calico-system/calico-node-tmgwz" Mar 14 00:23:31.479037 kubelet[3185]: E0314 00:23:31.478923 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:31.544759 containerd[1977]: time="2026-03-14T00:23:31.544715237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74bbd89c8-g4dvr,Uid:bdf5f6ae-5952-40a0-bb46-44a525a6c67c,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:31.555023 kubelet[3185]: I0314 00:23:31.554974 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8fcc26c0-21bc-4ace-9bc8-3087de8102bb-varrun\") pod \"csi-node-driver-q5zzm\" (UID: \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\") " pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:31.555023 kubelet[3185]: I0314 00:23:31.555017 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4nxc\" (UniqueName: \"kubernetes.io/projected/8fcc26c0-21bc-4ace-9bc8-3087de8102bb-kube-api-access-l4nxc\") pod \"csi-node-driver-q5zzm\" (UID: \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\") " pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:31.555217 kubelet[3185]: I0314 00:23:31.555071 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcc26c0-21bc-4ace-9bc8-3087de8102bb-kubelet-dir\") pod \"csi-node-driver-q5zzm\" (UID: \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\") " pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:31.555217 kubelet[3185]: I0314 00:23:31.555185 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8fcc26c0-21bc-4ace-9bc8-3087de8102bb-registration-dir\") pod \"csi-node-driver-q5zzm\" (UID: \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\") " pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:31.555217 kubelet[3185]: I0314 
00:23:31.555211 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8fcc26c0-21bc-4ace-9bc8-3087de8102bb-socket-dir\") pod \"csi-node-driver-q5zzm\" (UID: \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\") " pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:31.574400 kubelet[3185]: E0314 00:23:31.572717 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.574400 kubelet[3185]: W0314 00:23:31.572745 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.574400 kubelet[3185]: E0314 00:23:31.572773 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.575748 kubelet[3185]: E0314 00:23:31.575726 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.575932 kubelet[3185]: W0314 00:23:31.575913 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.576030 kubelet[3185]: E0314 00:23:31.576017 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.576404 kubelet[3185]: E0314 00:23:31.576390 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.576513 kubelet[3185]: W0314 00:23:31.576499 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.576596 kubelet[3185]: E0314 00:23:31.576584 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.640190 containerd[1977]: time="2026-03-14T00:23:31.639869339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:31.640190 containerd[1977]: time="2026-03-14T00:23:31.639952659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:31.640190 containerd[1977]: time="2026-03-14T00:23:31.639975491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:31.640190 containerd[1977]: time="2026-03-14T00:23:31.640094182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:31.657370 kubelet[3185]: E0314 00:23:31.657342 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.657370 kubelet[3185]: W0314 00:23:31.657367 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.657543 kubelet[3185]: E0314 00:23:31.657393 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.657799 kubelet[3185]: E0314 00:23:31.657777 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.657891 kubelet[3185]: W0314 00:23:31.657798 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.657891 kubelet[3185]: E0314 00:23:31.657830 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.658217 kubelet[3185]: E0314 00:23:31.658124 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.658217 kubelet[3185]: W0314 00:23:31.658139 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.658217 kubelet[3185]: E0314 00:23:31.658151 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.658460 kubelet[3185]: E0314 00:23:31.658431 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.658460 kubelet[3185]: W0314 00:23:31.658447 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.658578 kubelet[3185]: E0314 00:23:31.658460 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.658842 kubelet[3185]: E0314 00:23:31.658826 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.658912 kubelet[3185]: W0314 00:23:31.658843 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.658912 kubelet[3185]: E0314 00:23:31.658856 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.659395 kubelet[3185]: E0314 00:23:31.659170 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.659395 kubelet[3185]: W0314 00:23:31.659182 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.659395 kubelet[3185]: E0314 00:23:31.659195 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.659694 kubelet[3185]: E0314 00:23:31.659678 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.659905 kubelet[3185]: W0314 00:23:31.659695 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.659905 kubelet[3185]: E0314 00:23:31.659709 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.660110 kubelet[3185]: E0314 00:23:31.660002 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.660110 kubelet[3185]: W0314 00:23:31.660016 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.660110 kubelet[3185]: E0314 00:23:31.660029 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.661831 kubelet[3185]: E0314 00:23:31.660487 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.661831 kubelet[3185]: W0314 00:23:31.660500 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.661831 kubelet[3185]: E0314 00:23:31.660513 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.662210 kubelet[3185]: E0314 00:23:31.662188 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.662210 kubelet[3185]: W0314 00:23:31.662209 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.662322 kubelet[3185]: E0314 00:23:31.662224 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.662532 kubelet[3185]: E0314 00:23:31.662516 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.662592 kubelet[3185]: W0314 00:23:31.662532 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.662592 kubelet[3185]: E0314 00:23:31.662546 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.664469 kubelet[3185]: E0314 00:23:31.664445 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.664469 kubelet[3185]: W0314 00:23:31.664464 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.664613 kubelet[3185]: E0314 00:23:31.664479 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.664723 kubelet[3185]: E0314 00:23:31.664703 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.664723 kubelet[3185]: W0314 00:23:31.664718 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.664870 kubelet[3185]: E0314 00:23:31.664731 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.665069 kubelet[3185]: E0314 00:23:31.664974 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.665069 kubelet[3185]: W0314 00:23:31.664985 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.665069 kubelet[3185]: E0314 00:23:31.664995 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.665250 kubelet[3185]: E0314 00:23:31.665241 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.665294 kubelet[3185]: W0314 00:23:31.665287 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.665336 kubelet[3185]: E0314 00:23:31.665329 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.665686 kubelet[3185]: E0314 00:23:31.665554 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.665686 kubelet[3185]: W0314 00:23:31.665564 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.665686 kubelet[3185]: E0314 00:23:31.665572 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.665894 kubelet[3185]: E0314 00:23:31.665873 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.665894 kubelet[3185]: W0314 00:23:31.665888 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.665987 kubelet[3185]: E0314 00:23:31.665902 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.666250 kubelet[3185]: E0314 00:23:31.666117 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.666250 kubelet[3185]: W0314 00:23:31.666127 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.666250 kubelet[3185]: E0314 00:23:31.666136 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.666477 kubelet[3185]: E0314 00:23:31.666458 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.666477 kubelet[3185]: W0314 00:23:31.666472 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.666568 kubelet[3185]: E0314 00:23:31.666486 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.666788 kubelet[3185]: E0314 00:23:31.666754 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.666788 kubelet[3185]: W0314 00:23:31.666784 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.666907 kubelet[3185]: E0314 00:23:31.666798 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.668072 kubelet[3185]: E0314 00:23:31.667959 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.668072 kubelet[3185]: W0314 00:23:31.667994 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.668072 kubelet[3185]: E0314 00:23:31.668009 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.668911 kubelet[3185]: E0314 00:23:31.668744 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.668911 kubelet[3185]: W0314 00:23:31.668756 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.668911 kubelet[3185]: E0314 00:23:31.668766 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.669126 systemd[1]: Started cri-containerd-ddeb67203068d9b77339a62abc5999f2742a990178c4e2dfdc97770af58fdd0c.scope - libcontainer container ddeb67203068d9b77339a62abc5999f2742a990178c4e2dfdc97770af58fdd0c. 
Mar 14 00:23:31.669581 kubelet[3185]: E0314 00:23:31.669560 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.669581 kubelet[3185]: W0314 00:23:31.669576 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.669694 kubelet[3185]: E0314 00:23:31.669590 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.671682 kubelet[3185]: E0314 00:23:31.671650 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.671682 kubelet[3185]: W0314 00:23:31.671668 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.672060 kubelet[3185]: E0314 00:23:31.671683 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.672479 kubelet[3185]: E0314 00:23:31.672459 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.672479 kubelet[3185]: W0314 00:23:31.672476 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.672594 kubelet[3185]: E0314 00:23:31.672491 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:31.685334 kubelet[3185]: E0314 00:23:31.685303 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:31.685334 kubelet[3185]: W0314 00:23:31.685332 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:31.685540 kubelet[3185]: E0314 00:23:31.685354 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:31.721068 containerd[1977]: time="2026-03-14T00:23:31.720866896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tmgwz,Uid:dadc90c7-de0e-436b-b91f-1637f8778779,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:31.726577 containerd[1977]: time="2026-03-14T00:23:31.726538169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74bbd89c8-g4dvr,Uid:bdf5f6ae-5952-40a0-bb46-44a525a6c67c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddeb67203068d9b77339a62abc5999f2742a990178c4e2dfdc97770af58fdd0c\"" Mar 14 00:23:31.729121 containerd[1977]: time="2026-03-14T00:23:31.729018316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:23:31.755064 containerd[1977]: time="2026-03-14T00:23:31.754943261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:31.755064 containerd[1977]: time="2026-03-14T00:23:31.755013086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:31.755064 containerd[1977]: time="2026-03-14T00:23:31.755033977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:31.755362 containerd[1977]: time="2026-03-14T00:23:31.755148221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:31.782037 systemd[1]: Started cri-containerd-2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e.scope - libcontainer container 2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e. Mar 14 00:23:31.810345 containerd[1977]: time="2026-03-14T00:23:31.810312602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tmgwz,Uid:dadc90c7-de0e-436b-b91f-1637f8778779,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\"" Mar 14 00:23:33.118017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331366162.mount: Deactivated successfully. Mar 14 00:23:33.555581 kubelet[3185]: E0314 00:23:33.555461 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:34.734644 containerd[1977]: time="2026-03-14T00:23:34.734592971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:34.735824 containerd[1977]: time="2026-03-14T00:23:34.735757305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:23:34.736747 containerd[1977]: time="2026-03-14T00:23:34.736677236Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 
00:23:34.739606 containerd[1977]: time="2026-03-14T00:23:34.739342437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:34.755427 containerd[1977]: time="2026-03-14T00:23:34.755256012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.026188698s" Mar 14 00:23:34.755427 containerd[1977]: time="2026-03-14T00:23:34.755308110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:23:34.758179 containerd[1977]: time="2026-03-14T00:23:34.757945164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:23:34.777684 containerd[1977]: time="2026-03-14T00:23:34.777643565Z" level=info msg="CreateContainer within sandbox \"ddeb67203068d9b77339a62abc5999f2742a990178c4e2dfdc97770af58fdd0c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:23:34.801779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192570085.mount: Deactivated successfully. 
Mar 14 00:23:34.811053 containerd[1977]: time="2026-03-14T00:23:34.810865651Z" level=info msg="CreateContainer within sandbox \"ddeb67203068d9b77339a62abc5999f2742a990178c4e2dfdc97770af58fdd0c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c966b91db4c36fafde0fbfa43ee2cfb081b9af3da4f08172fd90544ed2f2de3\"" Mar 14 00:23:34.812482 containerd[1977]: time="2026-03-14T00:23:34.812456617Z" level=info msg="StartContainer for \"1c966b91db4c36fafde0fbfa43ee2cfb081b9af3da4f08172fd90544ed2f2de3\"" Mar 14 00:23:34.896041 systemd[1]: Started cri-containerd-1c966b91db4c36fafde0fbfa43ee2cfb081b9af3da4f08172fd90544ed2f2de3.scope - libcontainer container 1c966b91db4c36fafde0fbfa43ee2cfb081b9af3da4f08172fd90544ed2f2de3. Mar 14 00:23:34.952165 containerd[1977]: time="2026-03-14T00:23:34.952099656Z" level=info msg="StartContainer for \"1c966b91db4c36fafde0fbfa43ee2cfb081b9af3da4f08172fd90544ed2f2de3\" returns successfully" Mar 14 00:23:35.555065 kubelet[3185]: E0314 00:23:35.555013 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:35.664008 kubelet[3185]: E0314 00:23:35.663659 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.664008 kubelet[3185]: W0314 00:23:35.663701 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.664008 kubelet[3185]: E0314 00:23:35.663727 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.664298 kubelet[3185]: E0314 00:23:35.664061 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.664298 kubelet[3185]: W0314 00:23:35.664106 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.664298 kubelet[3185]: E0314 00:23:35.664120 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.664643 kubelet[3185]: E0314 00:23:35.664622 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.664726 kubelet[3185]: W0314 00:23:35.664638 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.664726 kubelet[3185]: E0314 00:23:35.664677 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.665114 kubelet[3185]: E0314 00:23:35.665074 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.665114 kubelet[3185]: W0314 00:23:35.665090 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.665114 kubelet[3185]: E0314 00:23:35.665105 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.665399 kubelet[3185]: E0314 00:23:35.665373 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.665449 kubelet[3185]: W0314 00:23:35.665402 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.665449 kubelet[3185]: E0314 00:23:35.665417 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.665833 kubelet[3185]: E0314 00:23:35.665658 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.665833 kubelet[3185]: W0314 00:23:35.665671 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.665833 kubelet[3185]: E0314 00:23:35.665686 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.666230 kubelet[3185]: E0314 00:23:35.666207 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.666230 kubelet[3185]: W0314 00:23:35.666223 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.666381 kubelet[3185]: E0314 00:23:35.666238 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.666567 kubelet[3185]: E0314 00:23:35.666538 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.666567 kubelet[3185]: W0314 00:23:35.666555 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.666683 kubelet[3185]: E0314 00:23:35.666568 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.666885 kubelet[3185]: E0314 00:23:35.666865 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.666885 kubelet[3185]: W0314 00:23:35.666882 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.667040 kubelet[3185]: E0314 00:23:35.666896 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.667125 kubelet[3185]: E0314 00:23:35.667103 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.667125 kubelet[3185]: W0314 00:23:35.667118 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.667354 kubelet[3185]: E0314 00:23:35.667131 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.667354 kubelet[3185]: E0314 00:23:35.667348 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.667518 kubelet[3185]: W0314 00:23:35.667358 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.667518 kubelet[3185]: E0314 00:23:35.667371 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.667732 kubelet[3185]: E0314 00:23:35.667713 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.667732 kubelet[3185]: W0314 00:23:35.667727 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.667963 kubelet[3185]: E0314 00:23:35.667741 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.668203 kubelet[3185]: E0314 00:23:35.668185 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.668203 kubelet[3185]: W0314 00:23:35.668200 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.668316 kubelet[3185]: E0314 00:23:35.668214 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.668658 kubelet[3185]: E0314 00:23:35.668633 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.668658 kubelet[3185]: W0314 00:23:35.668657 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.668993 kubelet[3185]: E0314 00:23:35.668671 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.668993 kubelet[3185]: E0314 00:23:35.668951 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.668993 kubelet[3185]: W0314 00:23:35.668985 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.669167 kubelet[3185]: E0314 00:23:35.669000 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.693459 kubelet[3185]: E0314 00:23:35.693425 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.693633 kubelet[3185]: W0314 00:23:35.693449 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.693633 kubelet[3185]: E0314 00:23:35.693492 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.693874 kubelet[3185]: E0314 00:23:35.693850 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.693874 kubelet[3185]: W0314 00:23:35.693869 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.694020 kubelet[3185]: E0314 00:23:35.693885 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.694230 kubelet[3185]: E0314 00:23:35.694210 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.694230 kubelet[3185]: W0314 00:23:35.694226 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.694350 kubelet[3185]: E0314 00:23:35.694240 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.694642 kubelet[3185]: E0314 00:23:35.694623 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.694642 kubelet[3185]: W0314 00:23:35.694638 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.694768 kubelet[3185]: E0314 00:23:35.694652 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.694941 kubelet[3185]: E0314 00:23:35.694923 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.694941 kubelet[3185]: W0314 00:23:35.694937 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.695070 kubelet[3185]: E0314 00:23:35.694953 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.695277 kubelet[3185]: E0314 00:23:35.695257 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.695277 kubelet[3185]: W0314 00:23:35.695274 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.695398 kubelet[3185]: E0314 00:23:35.695288 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.695727 kubelet[3185]: E0314 00:23:35.695708 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.695727 kubelet[3185]: W0314 00:23:35.695723 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.695908 kubelet[3185]: E0314 00:23:35.695737 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.696019 kubelet[3185]: E0314 00:23:35.696002 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.696072 kubelet[3185]: W0314 00:23:35.696025 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.696072 kubelet[3185]: E0314 00:23:35.696040 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.696309 kubelet[3185]: E0314 00:23:35.696292 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.696309 kubelet[3185]: W0314 00:23:35.696305 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.696416 kubelet[3185]: E0314 00:23:35.696319 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.696754 kubelet[3185]: E0314 00:23:35.696736 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.696754 kubelet[3185]: W0314 00:23:35.696750 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.696914 kubelet[3185]: E0314 00:23:35.696763 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.697050 kubelet[3185]: E0314 00:23:35.697034 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.697050 kubelet[3185]: W0314 00:23:35.697047 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.697050 kubelet[3185]: E0314 00:23:35.697060 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.697319 kubelet[3185]: E0314 00:23:35.697303 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.697319 kubelet[3185]: W0314 00:23:35.697317 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.697468 kubelet[3185]: E0314 00:23:35.697330 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.697544 kubelet[3185]: E0314 00:23:35.697536 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.697597 kubelet[3185]: W0314 00:23:35.697546 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.697597 kubelet[3185]: E0314 00:23:35.697559 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.697838 kubelet[3185]: E0314 00:23:35.697825 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.697895 kubelet[3185]: W0314 00:23:35.697838 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.697895 kubelet[3185]: E0314 00:23:35.697853 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.698434 kubelet[3185]: E0314 00:23:35.698417 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.698434 kubelet[3185]: W0314 00:23:35.698433 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.698555 kubelet[3185]: E0314 00:23:35.698446 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.698704 kubelet[3185]: E0314 00:23:35.698686 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.698704 kubelet[3185]: W0314 00:23:35.698700 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.698826 kubelet[3185]: E0314 00:23:35.698714 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:35.699022 kubelet[3185]: E0314 00:23:35.699004 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.699022 kubelet[3185]: W0314 00:23:35.699018 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.699134 kubelet[3185]: E0314 00:23:35.699032 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:23:35.699400 kubelet[3185]: E0314 00:23:35.699383 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:23:35.699400 kubelet[3185]: W0314 00:23:35.699396 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:23:35.699600 kubelet[3185]: E0314 00:23:35.699409 3185 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:23:36.249745 containerd[1977]: time="2026-03-14T00:23:36.249697134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:36.250820 containerd[1977]: time="2026-03-14T00:23:36.250690908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 14 00:23:36.251879 containerd[1977]: time="2026-03-14T00:23:36.251841774Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:36.254293 containerd[1977]: time="2026-03-14T00:23:36.254238726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:36.255989 containerd[1977]: time="2026-03-14T00:23:36.255055345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.497068486s" Mar 14 00:23:36.255989 containerd[1977]: time="2026-03-14T00:23:36.255098335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:23:36.260243 containerd[1977]: time="2026-03-14T00:23:36.260210373Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:23:36.278907 containerd[1977]: time="2026-03-14T00:23:36.278863617Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292\"" Mar 14 00:23:36.280953 containerd[1977]: time="2026-03-14T00:23:36.279689788Z" level=info msg="StartContainer for \"4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292\"" Mar 14 00:23:36.325070 systemd[1]: Started cri-containerd-4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292.scope - libcontainer container 4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292. Mar 14 00:23:36.356193 containerd[1977]: time="2026-03-14T00:23:36.356147782Z" level=info msg="StartContainer for \"4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292\" returns successfully" Mar 14 00:23:36.371427 systemd[1]: cri-containerd-4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292.scope: Deactivated successfully. 
Mar 14 00:23:36.660162 kubelet[3185]: I0314 00:23:36.660136 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:23:36.681837 kubelet[3185]: I0314 00:23:36.680042 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74bbd89c8-g4dvr" podStartSLOduration=2.651870385 podStartE2EDuration="5.680022056s" podCreationTimestamp="2026-03-14 00:23:31 +0000 UTC" firstStartedPulling="2026-03-14 00:23:31.728461867 +0000 UTC m=+19.364659130" lastFinishedPulling="2026-03-14 00:23:34.756613502 +0000 UTC m=+22.392810801" observedRunningTime="2026-03-14 00:23:35.66101999 +0000 UTC m=+23.297217280" watchObservedRunningTime="2026-03-14 00:23:36.680022056 +0000 UTC m=+24.316219341" Mar 14 00:23:36.765292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292-rootfs.mount: Deactivated successfully. Mar 14 00:23:37.204557 containerd[1977]: time="2026-03-14T00:23:37.174654872Z" level=info msg="shim disconnected" id=4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292 namespace=k8s.io Mar 14 00:23:37.205376 containerd[1977]: time="2026-03-14T00:23:37.204802407Z" level=warning msg="cleaning up after shim disconnected" id=4780d420c750a8fbc4ae3a6ca6f58722ac7049d40fb1678124b15c3c8635d292 namespace=k8s.io Mar 14 00:23:37.205376 containerd[1977]: time="2026-03-14T00:23:37.204845757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:23:37.555493 kubelet[3185]: E0314 00:23:37.555303 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:37.672063 containerd[1977]: time="2026-03-14T00:23:37.672018420Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:23:39.556101 kubelet[3185]: E0314 00:23:39.556032 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:41.555619 kubelet[3185]: E0314 00:23:41.555547 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:43.556350 kubelet[3185]: E0314 00:23:43.555145 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:45.082921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460798609.mount: Deactivated successfully. 
Mar 14 00:23:45.141189 containerd[1977]: time="2026-03-14T00:23:45.133633889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:45.142397 containerd[1977]: time="2026-03-14T00:23:45.142333109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:23:45.144339 containerd[1977]: time="2026-03-14T00:23:45.144299037Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:45.146959 containerd[1977]: time="2026-03-14T00:23:45.146898006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:45.149043 containerd[1977]: time="2026-03-14T00:23:45.148370967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.476299364s" Mar 14 00:23:45.149043 containerd[1977]: time="2026-03-14T00:23:45.148414416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:23:45.154182 containerd[1977]: time="2026-03-14T00:23:45.154144679Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:23:45.185270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678906724.mount: 
Deactivated successfully. Mar 14 00:23:45.191796 containerd[1977]: time="2026-03-14T00:23:45.191748162Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa\"" Mar 14 00:23:45.193541 containerd[1977]: time="2026-03-14T00:23:45.193116875Z" level=info msg="StartContainer for \"f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa\"" Mar 14 00:23:45.242591 systemd[1]: Started cri-containerd-f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa.scope - libcontainer container f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa. Mar 14 00:23:45.293434 containerd[1977]: time="2026-03-14T00:23:45.293365301Z" level=info msg="StartContainer for \"f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa\" returns successfully" Mar 14 00:23:45.336395 systemd[1]: cri-containerd-f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa.scope: Deactivated successfully. 
Mar 14 00:23:45.555024 kubelet[3185]: E0314 00:23:45.554954 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:45.698541 containerd[1977]: time="2026-03-14T00:23:45.698283879Z" level=info msg="shim disconnected" id=f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa namespace=k8s.io Mar 14 00:23:45.698541 containerd[1977]: time="2026-03-14T00:23:45.698342701Z" level=warning msg="cleaning up after shim disconnected" id=f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa namespace=k8s.io Mar 14 00:23:45.698541 containerd[1977]: time="2026-03-14T00:23:45.698358801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:23:46.084439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f706367562ef9f9da5889dc00f8cd2b488ca242cef69957f2355fbfb8bc01daa-rootfs.mount: Deactivated successfully. 
Mar 14 00:23:46.697980 containerd[1977]: time="2026-03-14T00:23:46.697696141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:23:47.555509 kubelet[3185]: E0314 00:23:47.555336 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:49.555751 kubelet[3185]: E0314 00:23:49.555697 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:50.868186 containerd[1977]: time="2026-03-14T00:23:50.868138477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:50.869388 containerd[1977]: time="2026-03-14T00:23:50.869341257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:23:50.870507 containerd[1977]: time="2026-03-14T00:23:50.870246338Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:50.872950 containerd[1977]: time="2026-03-14T00:23:50.872887174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:50.873821 containerd[1977]: time="2026-03-14T00:23:50.873770302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.176029382s" Mar 14 00:23:50.873908 containerd[1977]: time="2026-03-14T00:23:50.873832008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:23:50.880111 containerd[1977]: time="2026-03-14T00:23:50.879895105Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:23:50.898540 containerd[1977]: time="2026-03-14T00:23:50.898486058Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f\"" Mar 14 00:23:50.899923 containerd[1977]: time="2026-03-14T00:23:50.899564322Z" level=info msg="StartContainer for \"5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f\"" Mar 14 00:23:50.942015 systemd[1]: Started cri-containerd-5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f.scope - libcontainer container 5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f. 
Mar 14 00:23:50.974691 containerd[1977]: time="2026-03-14T00:23:50.974644812Z" level=info msg="StartContainer for \"5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f\" returns successfully" Mar 14 00:23:51.557826 kubelet[3185]: E0314 00:23:51.557333 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:52.102481 systemd[1]: cri-containerd-5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f.scope: Deactivated successfully. Mar 14 00:23:52.133054 kubelet[3185]: I0314 00:23:52.131222 3185 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 14 00:23:52.151665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f-rootfs.mount: Deactivated successfully. Mar 14 00:23:52.161240 containerd[1977]: time="2026-03-14T00:23:52.161060411Z" level=info msg="shim disconnected" id=5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f namespace=k8s.io Mar 14 00:23:52.161943 containerd[1977]: time="2026-03-14T00:23:52.161241652Z" level=warning msg="cleaning up after shim disconnected" id=5f49d93a76a000270e8ff8e231569fa448ee05c25a7ad5d9478026b05a20194f namespace=k8s.io Mar 14 00:23:52.161943 containerd[1977]: time="2026-03-14T00:23:52.161258586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:23:52.221633 systemd[1]: Created slice kubepods-besteffort-pod2191c208_e075_4e83_8a69_75c0e78faf73.slice - libcontainer container kubepods-besteffort-pod2191c208_e075_4e83_8a69_75c0e78faf73.slice. 
Mar 14 00:23:52.234344 systemd[1]: Created slice kubepods-burstable-pod0736b4c3_59d4_4880_ac34_375e0fee379d.slice - libcontainer container kubepods-burstable-pod0736b4c3_59d4_4880_ac34_375e0fee379d.slice. Mar 14 00:23:52.247863 systemd[1]: Created slice kubepods-besteffort-podf4d4bc3f_388e_42d8_aa7a_f80349da127e.slice - libcontainer container kubepods-besteffort-podf4d4bc3f_388e_42d8_aa7a_f80349da127e.slice. Mar 14 00:23:52.263671 systemd[1]: Created slice kubepods-besteffort-pod5d5616e1_d343_4d2f_a167_fc5c7ebcfeec.slice - libcontainer container kubepods-besteffort-pod5d5616e1_d343_4d2f_a167_fc5c7ebcfeec.slice. Mar 14 00:23:52.271624 systemd[1]: Created slice kubepods-besteffort-pod9d618a51_6829_430e_b49a_66d79e5d6bd9.slice - libcontainer container kubepods-besteffort-pod9d618a51_6829_430e_b49a_66d79e5d6bd9.slice. Mar 14 00:23:52.287426 systemd[1]: Created slice kubepods-besteffort-pod29b0b1cc_979c_471a_96de_036f619bea96.slice - libcontainer container kubepods-besteffort-pod29b0b1cc_979c_471a_96de_036f619bea96.slice. Mar 14 00:23:52.308865 systemd[1]: Created slice kubepods-burstable-pod599d0c23_2c01_40d7_91a9_4eddcc457e9d.slice - libcontainer container kubepods-burstable-pod599d0c23_2c01_40d7_91a9_4eddcc457e9d.slice. 
Mar 14 00:23:52.314463 kubelet[3185]: I0314 00:23:52.314201 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d618a51-6829-430e-b49a-66d79e5d6bd9-calico-apiserver-certs\") pod \"calico-apiserver-84dcd48bcb-8rmqt\" (UID: \"9d618a51-6829-430e-b49a-66d79e5d6bd9\") " pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" Mar 14 00:23:52.315914 kubelet[3185]: I0314 00:23:52.315880 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfxx\" (UniqueName: \"kubernetes.io/projected/9d618a51-6829-430e-b49a-66d79e5d6bd9-kube-api-access-hrfxx\") pod \"calico-apiserver-84dcd48bcb-8rmqt\" (UID: \"9d618a51-6829-430e-b49a-66d79e5d6bd9\") " pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" Mar 14 00:23:52.316079 kubelet[3185]: I0314 00:23:52.316060 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0736b4c3-59d4-4880-ac34-375e0fee379d-config-volume\") pod \"coredns-66bc5c9577-qz4q6\" (UID: \"0736b4c3-59d4-4880-ac34-375e0fee379d\") " pod="kube-system/coredns-66bc5c9577-qz4q6" Mar 14 00:23:52.316533 kubelet[3185]: I0314 00:23:52.316183 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-backend-key-pair\") pod \"whisker-645f7ff588-6rtfv\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:52.316533 kubelet[3185]: I0314 00:23:52.316214 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-ca-bundle\") pod \"whisker-645f7ff588-6rtfv\" (UID: 
\"2191c208-e075-4e83-8a69-75c0e78faf73\") " pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:52.316533 kubelet[3185]: I0314 00:23:52.316255 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfrq\" (UniqueName: \"kubernetes.io/projected/f4d4bc3f-388e-42d8-aa7a-f80349da127e-kube-api-access-zkfrq\") pod \"goldmane-cccfbd5cf-tmhdn\" (UID: \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\") " pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:52.316533 kubelet[3185]: I0314 00:23:52.316280 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmgrt\" (UniqueName: \"kubernetes.io/projected/5d5616e1-d343-4d2f-a167-fc5c7ebcfeec-kube-api-access-hmgrt\") pod \"calico-kube-controllers-d8b9cffb-kn7jv\" (UID: \"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec\") " pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" Mar 14 00:23:52.316533 kubelet[3185]: I0314 00:23:52.316310 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d4bc3f-388e-42d8-aa7a-f80349da127e-config\") pod \"goldmane-cccfbd5cf-tmhdn\" (UID: \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\") " pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:52.316923 kubelet[3185]: I0314 00:23:52.316331 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d5616e1-d343-4d2f-a167-fc5c7ebcfeec-tigera-ca-bundle\") pod \"calico-kube-controllers-d8b9cffb-kn7jv\" (UID: \"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec\") " pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" Mar 14 00:23:52.316923 kubelet[3185]: I0314 00:23:52.316357 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgdt\" (UniqueName: 
\"kubernetes.io/projected/0736b4c3-59d4-4880-ac34-375e0fee379d-kube-api-access-fkgdt\") pod \"coredns-66bc5c9577-qz4q6\" (UID: \"0736b4c3-59d4-4880-ac34-375e0fee379d\") " pod="kube-system/coredns-66bc5c9577-qz4q6" Mar 14 00:23:52.316923 kubelet[3185]: I0314 00:23:52.316384 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-nginx-config\") pod \"whisker-645f7ff588-6rtfv\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:52.316923 kubelet[3185]: I0314 00:23:52.316405 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdstm\" (UniqueName: \"kubernetes.io/projected/2191c208-e075-4e83-8a69-75c0e78faf73-kube-api-access-fdstm\") pod \"whisker-645f7ff588-6rtfv\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:52.316923 kubelet[3185]: I0314 00:23:52.316428 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29b0b1cc-979c-471a-96de-036f619bea96-calico-apiserver-certs\") pod \"calico-apiserver-84dcd48bcb-ntpm7\" (UID: \"29b0b1cc-979c-471a-96de-036f619bea96\") " pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" Mar 14 00:23:52.318090 kubelet[3185]: I0314 00:23:52.316451 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzlx\" (UniqueName: \"kubernetes.io/projected/29b0b1cc-979c-471a-96de-036f619bea96-kube-api-access-dhzlx\") pod \"calico-apiserver-84dcd48bcb-ntpm7\" (UID: \"29b0b1cc-979c-471a-96de-036f619bea96\") " pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" Mar 14 00:23:52.318090 kubelet[3185]: I0314 00:23:52.316749 3185 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d4bc3f-388e-42d8-aa7a-f80349da127e-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-tmhdn\" (UID: \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\") " pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:52.318090 kubelet[3185]: I0314 00:23:52.316789 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f4d4bc3f-388e-42d8-aa7a-f80349da127e-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-tmhdn\" (UID: \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\") " pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:52.318090 kubelet[3185]: I0314 00:23:52.316847 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xml6\" (UniqueName: \"kubernetes.io/projected/599d0c23-2c01-40d7-91a9-4eddcc457e9d-kube-api-access-5xml6\") pod \"coredns-66bc5c9577-n76gs\" (UID: \"599d0c23-2c01-40d7-91a9-4eddcc457e9d\") " pod="kube-system/coredns-66bc5c9577-n76gs" Mar 14 00:23:52.318090 kubelet[3185]: I0314 00:23:52.316917 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/599d0c23-2c01-40d7-91a9-4eddcc457e9d-config-volume\") pod \"coredns-66bc5c9577-n76gs\" (UID: \"599d0c23-2c01-40d7-91a9-4eddcc457e9d\") " pod="kube-system/coredns-66bc5c9577-n76gs" Mar 14 00:23:52.535417 containerd[1977]: time="2026-03-14T00:23:52.535296532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645f7ff588-6rtfv,Uid:2191c208-e075-4e83-8a69-75c0e78faf73,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:52.544694 containerd[1977]: time="2026-03-14T00:23:52.544652190Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-qz4q6,Uid:0736b4c3-59d4-4880-ac34-375e0fee379d,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:52.572367 containerd[1977]: time="2026-03-14T00:23:52.571894990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8b9cffb-kn7jv,Uid:5d5616e1-d343-4d2f-a167-fc5c7ebcfeec,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:52.579303 containerd[1977]: time="2026-03-14T00:23:52.579257451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tmhdn,Uid:f4d4bc3f-388e-42d8-aa7a-f80349da127e,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:52.588615 containerd[1977]: time="2026-03-14T00:23:52.588194102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-8rmqt,Uid:9d618a51-6829-430e-b49a-66d79e5d6bd9,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:52.618840 containerd[1977]: time="2026-03-14T00:23:52.618771324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-ntpm7,Uid:29b0b1cc-979c-471a-96de-036f619bea96,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:52.630898 containerd[1977]: time="2026-03-14T00:23:52.630465459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n76gs,Uid:599d0c23-2c01-40d7-91a9-4eddcc457e9d,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:52.958528 containerd[1977]: time="2026-03-14T00:23:52.958483152Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:23:52.983330 containerd[1977]: time="2026-03-14T00:23:52.983267123Z" level=info msg="CreateContainer within sandbox \"2c7d98ddcb51e4f94b2bd31a86cae19170fbd89aa97de334acf6948abf4b3a5e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2\"" Mar 14 00:23:52.986454 containerd[1977]: 
time="2026-03-14T00:23:52.986332027Z" level=info msg="StartContainer for \"a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2\"" Mar 14 00:23:53.039217 systemd[1]: Started cri-containerd-a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2.scope - libcontainer container a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2. Mar 14 00:23:53.252650 containerd[1977]: time="2026-03-14T00:23:53.252378652Z" level=error msg="Failed to destroy network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.258124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319-shm.mount: Deactivated successfully. Mar 14 00:23:53.270878 containerd[1977]: time="2026-03-14T00:23:53.270801094Z" level=info msg="StartContainer for \"a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2\" returns successfully" Mar 14 00:23:53.291090 containerd[1977]: time="2026-03-14T00:23:53.291022474Z" level=error msg="encountered an error cleaning up failed sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.291243 containerd[1977]: time="2026-03-14T00:23:53.291153136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8b9cffb-kn7jv,Uid:5d5616e1-d343-4d2f-a167-fc5c7ebcfeec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.291520 kubelet[3185]: E0314 00:23:53.291472 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.292257 kubelet[3185]: E0314 00:23:53.291554 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" Mar 14 00:23:53.292257 kubelet[3185]: E0314 00:23:53.291579 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" Mar 14 00:23:53.292257 kubelet[3185]: E0314 00:23:53.291650 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d8b9cffb-kn7jv_calico-system(5d5616e1-d343-4d2f-a167-fc5c7ebcfeec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-d8b9cffb-kn7jv_calico-system(5d5616e1-d343-4d2f-a167-fc5c7ebcfeec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" podUID="5d5616e1-d343-4d2f-a167-fc5c7ebcfeec" Mar 14 00:23:53.311077 containerd[1977]: time="2026-03-14T00:23:53.310917662Z" level=error msg="Failed to destroy network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.314013 containerd[1977]: time="2026-03-14T00:23:53.313070043Z" level=error msg="encountered an error cleaning up failed sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.314013 containerd[1977]: time="2026-03-14T00:23:53.313155885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n76gs,Uid:599d0c23-2c01-40d7-91a9-4eddcc457e9d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.314207 kubelet[3185]: E0314 00:23:53.313415 3185 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.314207 kubelet[3185]: E0314 00:23:53.313472 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n76gs" Mar 14 00:23:53.314207 kubelet[3185]: E0314 00:23:53.313499 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n76gs" Mar 14 00:23:53.314367 kubelet[3185]: E0314 00:23:53.313557 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-n76gs_kube-system(599d0c23-2c01-40d7-91a9-4eddcc457e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-n76gs_kube-system(599d0c23-2c01-40d7-91a9-4eddcc457e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-n76gs" podUID="599d0c23-2c01-40d7-91a9-4eddcc457e9d" Mar 14 00:23:53.318347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd-shm.mount: Deactivated successfully. Mar 14 00:23:53.331778 containerd[1977]: time="2026-03-14T00:23:53.331725731Z" level=error msg="Failed to destroy network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.332457 containerd[1977]: time="2026-03-14T00:23:53.331766054Z" level=error msg="Failed to destroy network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.334830 containerd[1977]: time="2026-03-14T00:23:53.332979848Z" level=error msg="encountered an error cleaning up failed sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.335057 containerd[1977]: time="2026-03-14T00:23:53.335012847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645f7ff588-6rtfv,Uid:2191c208-e075-4e83-8a69-75c0e78faf73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.335230 containerd[1977]: time="2026-03-14T00:23:53.333058793Z" level=error msg="encountered an error cleaning up failed sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.335382 containerd[1977]: time="2026-03-14T00:23:53.335346966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-8rmqt,Uid:9d618a51-6829-430e-b49a-66d79e5d6bd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.335556 kubelet[3185]: E0314 00:23:53.335517 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.335622 kubelet[3185]: E0314 00:23:53.335586 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:53.335622 kubelet[3185]: E0314 00:23:53.335613 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-645f7ff588-6rtfv" Mar 14 00:23:53.337028 kubelet[3185]: E0314 00:23:53.335671 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-645f7ff588-6rtfv_calico-system(2191c208-e075-4e83-8a69-75c0e78faf73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-645f7ff588-6rtfv_calico-system(2191c208-e075-4e83-8a69-75c0e78faf73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645f7ff588-6rtfv" podUID="2191c208-e075-4e83-8a69-75c0e78faf73" Mar 14 00:23:53.339688 kubelet[3185]: E0314 00:23:53.337908 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.339688 kubelet[3185]: E0314 00:23:53.337973 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" Mar 14 00:23:53.339688 kubelet[3185]: E0314 00:23:53.337998 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" Mar 14 00:23:53.339031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6-shm.mount: Deactivated successfully. Mar 14 00:23:53.373505 kubelet[3185]: E0314 00:23:53.338056 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84dcd48bcb-8rmqt_calico-system(9d618a51-6829-430e-b49a-66d79e5d6bd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84dcd48bcb-8rmqt_calico-system(9d618a51-6829-430e-b49a-66d79e5d6bd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" podUID="9d618a51-6829-430e-b49a-66d79e5d6bd9" Mar 14 00:23:53.373505 kubelet[3185]: E0314 00:23:53.357052 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373505 kubelet[3185]: E0314 00:23:53.357146 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.356077037Z" level=error msg="Failed to destroy network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.356551531Z" level=error msg="encountered an error cleaning up failed sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.356614651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tmhdn,Uid:f4d4bc3f-388e-42d8-aa7a-f80349da127e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.361335972Z" level=error msg="Failed to destroy network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.362254765Z" level=error msg="encountered an error cleaning up failed sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.362324094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-ntpm7,Uid:29b0b1cc-979c-471a-96de-036f619bea96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.368311226Z" level=error msg="Failed to destroy network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.368610393Z" 
level=error msg="encountered an error cleaning up failed sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.373692 containerd[1977]: time="2026-03-14T00:23:53.368659421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qz4q6,Uid:0736b4c3-59d4-4880-ac34-375e0fee379d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.339175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc-shm.mount: Deactivated successfully. 
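Every `RunPodSandbox` failure in this stretch carries the same containerd `&PodSandboxMetadata{...}` fragment in its message. A small triage sketch that counts failures per namespace/pod from `level=error` entries; the regex is fitted to the message format shown in this log, not to any containerd API, and is an assumption about how the entries are formatted:

```python
import re
from collections import Counter

# Matches the PodSandboxMetadata fragment containerd logs on
# RunPodSandbox failures (format taken from the entries above).
SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),'
    r'Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),'
)

def failed_sandboxes(lines):
    """Count RunPodSandbox failures per namespace/pod in a log stream."""
    counts = Counter()
    for line in lines:
        if 'level=error' not in line:
            continue
        m = SANDBOX_RE.search(line)
        if m:
            counts[f"{m['ns']}/{m['name']}"] += 1
    return counts
```

Running this over the stream collapses the repeated entries into one count per pod, which makes it obvious that all seven pending pods are failing for the same underlying reason.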
Mar 14 00:23:53.374169 kubelet[3185]: E0314 00:23:53.357172 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tmhdn" Mar 14 00:23:53.374169 kubelet[3185]: E0314 00:23:53.358470 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-tmhdn_calico-system(f4d4bc3f-388e-42d8-aa7a-f80349da127e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-tmhdn_calico-system(f4d4bc3f-388e-42d8-aa7a-f80349da127e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tmhdn" podUID="f4d4bc3f-388e-42d8-aa7a-f80349da127e" Mar 14 00:23:53.374169 kubelet[3185]: E0314 00:23:53.363030 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.361997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae-shm.mount: Deactivated successfully. 
Mar 14 00:23:53.374461 kubelet[3185]: E0314 00:23:53.363080 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" Mar 14 00:23:53.374461 kubelet[3185]: E0314 00:23:53.363104 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" Mar 14 00:23:53.374461 kubelet[3185]: E0314 00:23:53.363158 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84dcd48bcb-ntpm7_calico-system(29b0b1cc-979c-471a-96de-036f619bea96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84dcd48bcb-ntpm7_calico-system(29b0b1cc-979c-471a-96de-036f619bea96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" podUID="29b0b1cc-979c-471a-96de-036f619bea96" Mar 14 00:23:53.374634 kubelet[3185]: E0314 00:23:53.368864 3185 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.374634 kubelet[3185]: E0314 00:23:53.368907 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qz4q6" Mar 14 00:23:53.374634 kubelet[3185]: E0314 00:23:53.368929 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qz4q6" Mar 14 00:23:53.374786 kubelet[3185]: E0314 00:23:53.368975 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qz4q6_kube-system(0736b4c3-59d4-4880-ac34-375e0fee379d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qz4q6_kube-system(0736b4c3-59d4-4880-ac34-375e0fee379d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-qz4q6" podUID="0736b4c3-59d4-4880-ac34-375e0fee379d" Mar 14 00:23:53.560585 systemd[1]: Created slice kubepods-besteffort-pod8fcc26c0_21bc_4ace_9bc8_3087de8102bb.slice - libcontainer container kubepods-besteffort-pod8fcc26c0_21bc_4ace_9bc8_3087de8102bb.slice. Mar 14 00:23:53.566094 containerd[1977]: time="2026-03-14T00:23:53.566057201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5zzm,Uid:8fcc26c0-21bc-4ace-9bc8-3087de8102bb,Namespace:calico-system,Attempt:0,}" Mar 14 00:23:53.630411 containerd[1977]: time="2026-03-14T00:23:53.630365169Z" level=error msg="Failed to destroy network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.630730 containerd[1977]: time="2026-03-14T00:23:53.630699487Z" level=error msg="encountered an error cleaning up failed sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.630872 containerd[1977]: time="2026-03-14T00:23:53.630760443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5zzm,Uid:8fcc26c0-21bc-4ace-9bc8-3087de8102bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.631025 kubelet[3185]: E0314 00:23:53.630991 3185 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:53.631097 kubelet[3185]: E0314 00:23:53.631045 3185 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:53.631097 kubelet[3185]: E0314 00:23:53.631076 3185 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q5zzm" Mar 14 00:23:53.631191 kubelet[3185]: E0314 00:23:53.631136 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q5zzm_calico-system(8fcc26c0-21bc-4ace-9bc8-3087de8102bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q5zzm_calico-system(8fcc26c0-21bc-4ace-9bc8-3087de8102bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:53.912583 kubelet[3185]: I0314 00:23:53.912554 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:23:53.925384 containerd[1977]: time="2026-03-14T00:23:53.925337950Z" level=info msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" Mar 14 00:23:53.928284 containerd[1977]: time="2026-03-14T00:23:53.928244080Z" level=info msg="Ensure that sandbox 11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae in task-service has been cleanup successfully" Mar 14 00:23:53.944713 kubelet[3185]: I0314 00:23:53.944681 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:23:53.946581 containerd[1977]: time="2026-03-14T00:23:53.946409393Z" level=info msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" Mar 14 00:23:53.950997 containerd[1977]: time="2026-03-14T00:23:53.950663382Z" level=info msg="Ensure that sandbox fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd in task-service has been cleanup successfully" Mar 14 00:23:53.960404 kubelet[3185]: I0314 00:23:53.959994 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:53.961911 containerd[1977]: time="2026-03-14T00:23:53.961798702Z" level=info msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" Mar 14 00:23:53.963354 containerd[1977]: time="2026-03-14T00:23:53.963305008Z" level=info msg="Ensure that sandbox 8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc in task-service has 
been cleanup successfully" Mar 14 00:23:53.964909 kubelet[3185]: I0314 00:23:53.964422 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tmgwz" podStartSLOduration=3.891138487 podStartE2EDuration="22.95429788s" podCreationTimestamp="2026-03-14 00:23:31 +0000 UTC" firstStartedPulling="2026-03-14 00:23:31.812091772 +0000 UTC m=+19.448289035" lastFinishedPulling="2026-03-14 00:23:50.875251163 +0000 UTC m=+38.511448428" observedRunningTime="2026-03-14 00:23:53.954284313 +0000 UTC m=+41.590481598" watchObservedRunningTime="2026-03-14 00:23:53.95429788 +0000 UTC m=+41.590495166" Mar 14 00:23:53.975676 kubelet[3185]: I0314 00:23:53.975646 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:23:53.980663 containerd[1977]: time="2026-03-14T00:23:53.980623239Z" level=info msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" Mar 14 00:23:53.980884 containerd[1977]: time="2026-03-14T00:23:53.980861326Z" level=info msg="Ensure that sandbox 14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6 in task-service has been cleanup successfully" Mar 14 00:23:54.012233 kubelet[3185]: I0314 00:23:54.011121 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:23:54.014927 containerd[1977]: time="2026-03-14T00:23:54.014148033Z" level=info msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" Mar 14 00:23:54.017083 containerd[1977]: time="2026-03-14T00:23:54.017042998Z" level=info msg="Ensure that sandbox 7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319 in task-service has been cleanup successfully" Mar 14 00:23:54.040835 kubelet[3185]: I0314 00:23:54.040789 3185 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:23:54.046321 containerd[1977]: time="2026-03-14T00:23:54.046095044Z" level=info msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" Mar 14 00:23:54.047573 containerd[1977]: time="2026-03-14T00:23:54.047541708Z" level=info msg="Ensure that sandbox 868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13 in task-service has been cleanup successfully" Mar 14 00:23:54.065274 containerd[1977]: time="2026-03-14T00:23:54.064929219Z" level=error msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" failed" error="failed to destroy network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.066088 kubelet[3185]: E0314 00:23:54.066039 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:23:54.066302 kubelet[3185]: I0314 00:23:54.066273 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:23:54.067654 kubelet[3185]: E0314 00:23:54.067583 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae"} Mar 14 00:23:54.068230 
kubelet[3185]: E0314 00:23:54.068128 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.068230 kubelet[3185]: E0314 00:23:54.068172 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4d4bc3f-388e-42d8-aa7a-f80349da127e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tmhdn" podUID="f4d4bc3f-388e-42d8-aa7a-f80349da127e" Mar 14 00:23:54.078884 containerd[1977]: time="2026-03-14T00:23:54.078702920Z" level=info msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" Mar 14 00:23:54.079020 containerd[1977]: time="2026-03-14T00:23:54.078940451Z" level=info msg="Ensure that sandbox 87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879 in task-service has been cleanup successfully" Mar 14 00:23:54.085163 kubelet[3185]: I0314 00:23:54.084463 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:23:54.086759 containerd[1977]: time="2026-03-14T00:23:54.086727845Z" level=info msg="StopPodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" Mar 
14 00:23:54.087138 containerd[1977]: time="2026-03-14T00:23:54.087113934Z" level=info msg="Ensure that sandbox be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc in task-service has been cleanup successfully" Mar 14 00:23:54.142485 containerd[1977]: time="2026-03-14T00:23:54.142415535Z" level=error msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" failed" error="failed to destroy network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.143828 kubelet[3185]: E0314 00:23:54.143764 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:23:54.143958 kubelet[3185]: E0314 00:23:54.143840 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6"} Mar 14 00:23:54.143958 kubelet[3185]: E0314 00:23:54.143881 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d618a51-6829-430e-b49a-66d79e5d6bd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Mar 14 00:23:54.143958 kubelet[3185]: E0314 00:23:54.143914 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d618a51-6829-430e-b49a-66d79e5d6bd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" podUID="9d618a51-6829-430e-b49a-66d79e5d6bd9" Mar 14 00:23:54.153999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13-shm.mount: Deactivated successfully. Mar 14 00:23:54.154426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc-shm.mount: Deactivated successfully. 
Mar 14 00:23:54.214854 containerd[1977]: time="2026-03-14T00:23:54.214592179Z" level=error msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" failed" error="failed to destroy network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.215147 kubelet[3185]: E0314 00:23:54.214904 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:23:54.215147 kubelet[3185]: E0314 00:23:54.214953 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd"} Mar 14 00:23:54.215147 kubelet[3185]: E0314 00:23:54.215005 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"599d0c23-2c01-40d7-91a9-4eddcc457e9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.215147 kubelet[3185]: E0314 00:23:54.215040 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"599d0c23-2c01-40d7-91a9-4eddcc457e9d\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-n76gs" podUID="599d0c23-2c01-40d7-91a9-4eddcc457e9d" Mar 14 00:23:54.216657 containerd[1977]: time="2026-03-14T00:23:54.216093392Z" level=error msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" failed" error="failed to destroy network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.216769 kubelet[3185]: E0314 00:23:54.216306 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:54.216769 kubelet[3185]: E0314 00:23:54.216348 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc"} Mar 14 00:23:54.216769 kubelet[3185]: E0314 00:23:54.216381 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2191c208-e075-4e83-8a69-75c0e78faf73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.216769 kubelet[3185]: E0314 00:23:54.216415 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2191c208-e075-4e83-8a69-75c0e78faf73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645f7ff588-6rtfv" podUID="2191c208-e075-4e83-8a69-75c0e78faf73" Mar 14 00:23:54.239675 containerd[1977]: time="2026-03-14T00:23:54.239167051Z" level=error msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" failed" error="failed to destroy network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.239959 kubelet[3185]: E0314 00:23:54.239478 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:23:54.239959 kubelet[3185]: E0314 00:23:54.239530 3185 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13"} Mar 14 00:23:54.239959 kubelet[3185]: E0314 00:23:54.239568 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29b0b1cc-979c-471a-96de-036f619bea96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.239959 kubelet[3185]: E0314 00:23:54.239607 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29b0b1cc-979c-471a-96de-036f619bea96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" podUID="29b0b1cc-979c-471a-96de-036f619bea96" Mar 14 00:23:54.248448 containerd[1977]: time="2026-03-14T00:23:54.248393425Z" level=error msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" failed" error="failed to destroy network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.248961 kubelet[3185]: E0314 00:23:54.248911 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:23:54.249075 kubelet[3185]: E0314 00:23:54.248971 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879"} Mar 14 00:23:54.249075 kubelet[3185]: E0314 00:23:54.249007 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.249075 kubelet[3185]: E0314 00:23:54.249052 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fcc26c0-21bc-4ace-9bc8-3087de8102bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q5zzm" podUID="8fcc26c0-21bc-4ace-9bc8-3087de8102bb" Mar 14 00:23:54.249451 containerd[1977]: time="2026-03-14T00:23:54.249292709Z" level=error msg="StopPodSandbox for 
\"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" failed" error="failed to destroy network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.249451 containerd[1977]: time="2026-03-14T00:23:54.249359249Z" level=error msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" failed" error="failed to destroy network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:23:54.249620 kubelet[3185]: E0314 00:23:54.249516 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:23:54.249620 kubelet[3185]: E0314 00:23:54.249558 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc"} Mar 14 00:23:54.249620 kubelet[3185]: E0314 00:23:54.249590 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0736b4c3-59d4-4880-ac34-375e0fee379d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.249870 kubelet[3185]: E0314 00:23:54.249623 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0736b4c3-59d4-4880-ac34-375e0fee379d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qz4q6" podUID="0736b4c3-59d4-4880-ac34-375e0fee379d" Mar 14 00:23:54.249870 kubelet[3185]: E0314 00:23:54.249699 3185 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:23:54.249870 kubelet[3185]: E0314 00:23:54.249723 3185 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319"} Mar 14 00:23:54.249870 kubelet[3185]: E0314 00:23:54.249749 3185 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:23:54.250102 kubelet[3185]: E0314 00:23:54.249775 3185 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" podUID="5d5616e1-d343-4d2f-a167-fc5c7ebcfeec" Mar 14 00:23:56.089381 kubelet[3185]: I0314 00:23:56.089344 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:23:58.853146 systemd[1]: run-containerd-runc-k8s.io-a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2-runc.OLR0WM.mount: Deactivated successfully. Mar 14 00:23:59.329836 containerd[1977]: time="2026-03-14T00:23:59.329338660Z" level=info msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.483 [INFO][4606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.483 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" iface="eth0" netns="/var/run/netns/cni-f82e8833-29fd-9a42-2d27-7933a7394ef8" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.484 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" iface="eth0" netns="/var/run/netns/cni-f82e8833-29fd-9a42-2d27-7933a7394ef8" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.485 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" iface="eth0" netns="/var/run/netns/cni-f82e8833-29fd-9a42-2d27-7933a7394ef8" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.485 [INFO][4606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.485 [INFO][4606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.562 [INFO][4619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.563 [INFO][4619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.563 [INFO][4619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.573 [WARNING][4619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.573 [INFO][4619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.576 [INFO][4619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:23:59.582852 containerd[1977]: 2026-03-14 00:23:59.580 [INFO][4606] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:23:59.587412 containerd[1977]: time="2026-03-14T00:23:59.584775110Z" level=info msg="TearDown network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" successfully" Mar 14 00:23:59.587412 containerd[1977]: time="2026-03-14T00:23:59.584837624Z" level=info msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" returns successfully" Mar 14 00:23:59.587488 systemd[1]: run-netns-cni\x2df82e8833\x2d29fd\x2d9a42\x2d2d27\x2d7933a7394ef8.mount: Deactivated successfully. 
Mar 14 00:23:59.693224 kubelet[3185]: I0314 00:23:59.693186 3185 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-ca-bundle\") pod \"2191c208-e075-4e83-8a69-75c0e78faf73\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " Mar 14 00:23:59.693722 kubelet[3185]: I0314 00:23:59.693247 3185 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-backend-key-pair\") pod \"2191c208-e075-4e83-8a69-75c0e78faf73\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " Mar 14 00:23:59.693722 kubelet[3185]: I0314 00:23:59.693284 3185 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-nginx-config\") pod \"2191c208-e075-4e83-8a69-75c0e78faf73\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " Mar 14 00:23:59.693722 kubelet[3185]: I0314 00:23:59.693312 3185 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdstm\" (UniqueName: \"kubernetes.io/projected/2191c208-e075-4e83-8a69-75c0e78faf73-kube-api-access-fdstm\") pod \"2191c208-e075-4e83-8a69-75c0e78faf73\" (UID: \"2191c208-e075-4e83-8a69-75c0e78faf73\") " Mar 14 00:23:59.704343 systemd[1]: var-lib-kubelet-pods-2191c208\x2de075\x2d4e83\x2d8a69\x2d75c0e78faf73-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:23:59.709763 kubelet[3185]: I0314 00:23:59.709703 3185 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2191c208-e075-4e83-8a69-75c0e78faf73" (UID: "2191c208-e075-4e83-8a69-75c0e78faf73"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:23:59.712840 kubelet[3185]: I0314 00:23:59.711274 3185 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2191c208-e075-4e83-8a69-75c0e78faf73" (UID: "2191c208-e075-4e83-8a69-75c0e78faf73"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:23:59.712840 kubelet[3185]: I0314 00:23:59.702754 3185 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2191c208-e075-4e83-8a69-75c0e78faf73" (UID: "2191c208-e075-4e83-8a69-75c0e78faf73"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:23:59.715043 kubelet[3185]: I0314 00:23:59.715003 3185 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2191c208-e075-4e83-8a69-75c0e78faf73-kube-api-access-fdstm" (OuterVolumeSpecName: "kube-api-access-fdstm") pod "2191c208-e075-4e83-8a69-75c0e78faf73" (UID: "2191c208-e075-4e83-8a69-75c0e78faf73"). InnerVolumeSpecName "kube-api-access-fdstm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:23:59.717713 systemd[1]: var-lib-kubelet-pods-2191c208\x2de075\x2d4e83\x2d8a69\x2d75c0e78faf73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfdstm.mount: Deactivated successfully. 
Mar 14 00:23:59.794501 kubelet[3185]: I0314 00:23:59.794439 3185 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-backend-key-pair\") on node \"ip-172-31-20-55\" DevicePath \"\"" Mar 14 00:23:59.794501 kubelet[3185]: I0314 00:23:59.794479 3185 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-nginx-config\") on node \"ip-172-31-20-55\" DevicePath \"\"" Mar 14 00:23:59.794501 kubelet[3185]: I0314 00:23:59.794492 3185 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdstm\" (UniqueName: \"kubernetes.io/projected/2191c208-e075-4e83-8a69-75c0e78faf73-kube-api-access-fdstm\") on node \"ip-172-31-20-55\" DevicePath \"\"" Mar 14 00:23:59.794501 kubelet[3185]: I0314 00:23:59.794504 3185 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2191c208-e075-4e83-8a69-75c0e78faf73-whisker-ca-bundle\") on node \"ip-172-31-20-55\" DevicePath \"\"" Mar 14 00:24:00.101678 systemd[1]: Removed slice kubepods-besteffort-pod2191c208_e075_4e83_8a69_75c0e78faf73.slice - libcontainer container kubepods-besteffort-pod2191c208_e075_4e83_8a69_75c0e78faf73.slice. Mar 14 00:24:00.241050 systemd[1]: Created slice kubepods-besteffort-pod4fba136d_0d29_495a_b43f_9226861a82bf.slice - libcontainer container kubepods-besteffort-pod4fba136d_0d29_495a_b43f_9226861a82bf.slice. 
Mar 14 00:24:00.297672 kubelet[3185]: I0314 00:24:00.297625 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fba136d-0d29-495a-b43f-9226861a82bf-whisker-backend-key-pair\") pod \"whisker-585458d8f7-k7hfk\" (UID: \"4fba136d-0d29-495a-b43f-9226861a82bf\") " pod="calico-system/whisker-585458d8f7-k7hfk" Mar 14 00:24:00.297672 kubelet[3185]: I0314 00:24:00.297683 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4fba136d-0d29-495a-b43f-9226861a82bf-nginx-config\") pod \"whisker-585458d8f7-k7hfk\" (UID: \"4fba136d-0d29-495a-b43f-9226861a82bf\") " pod="calico-system/whisker-585458d8f7-k7hfk" Mar 14 00:24:00.297672 kubelet[3185]: I0314 00:24:00.297712 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fba136d-0d29-495a-b43f-9226861a82bf-whisker-ca-bundle\") pod \"whisker-585458d8f7-k7hfk\" (UID: \"4fba136d-0d29-495a-b43f-9226861a82bf\") " pod="calico-system/whisker-585458d8f7-k7hfk" Mar 14 00:24:00.297985 kubelet[3185]: I0314 00:24:00.297739 3185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vcl\" (UniqueName: \"kubernetes.io/projected/4fba136d-0d29-495a-b43f-9226861a82bf-kube-api-access-f4vcl\") pod \"whisker-585458d8f7-k7hfk\" (UID: \"4fba136d-0d29-495a-b43f-9226861a82bf\") " pod="calico-system/whisker-585458d8f7-k7hfk" Mar 14 00:24:00.547644 containerd[1977]: time="2026-03-14T00:24:00.547572019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585458d8f7-k7hfk,Uid:4fba136d-0d29-495a-b43f-9226861a82bf,Namespace:calico-system,Attempt:0,}" Mar 14 00:24:00.558970 kubelet[3185]: I0314 00:24:00.558590 3185 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="2191c208-e075-4e83-8a69-75c0e78faf73" path="/var/lib/kubelet/pods/2191c208-e075-4e83-8a69-75c0e78faf73/volumes" Mar 14 00:24:00.800943 systemd-networkd[1794]: cali38746d40327: Link UP Mar 14 00:24:00.802209 systemd-networkd[1794]: cali38746d40327: Gained carrier Mar 14 00:24:00.815517 (udev-worker)[4708]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.597 [ERROR][4641] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.617 [INFO][4641] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0 whisker-585458d8f7- calico-system 4fba136d-0d29-495a-b43f-9226861a82bf 940 0 2026-03-14 00:24:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:585458d8f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-55 whisker-585458d8f7-k7hfk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali38746d40327 [] [] }} ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.617 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.650 [INFO][4653] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" HandleID="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Workload="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.664 [INFO][4653] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" HandleID="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Workload="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"whisker-585458d8f7-k7hfk", "timestamp":"2026-03-14 00:24:00.65044943 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d9340)} Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.664 [INFO][4653] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.664 [INFO][4653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.664 [INFO][4653] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.670 [INFO][4653] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.678 [INFO][4653] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.684 [INFO][4653] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.687 [INFO][4653] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.689 [INFO][4653] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.690 [INFO][4653] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.692 [INFO][4653] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.697 [INFO][4653] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.707 [INFO][4653] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.1/26] block=192.168.67.0/26 
handle="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.708 [INFO][4653] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.1/26] handle="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" host="ip-172-31-20-55" Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.708 [INFO][4653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:00.837881 containerd[1977]: 2026-03-14 00:24:00.708 [INFO][4653] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.1/26] IPv6=[] ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" HandleID="k8s-pod-network.0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Workload="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.710 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0", GenerateName:"whisker-585458d8f7-", Namespace:"calico-system", SelfLink:"", UID:"4fba136d-0d29-495a-b43f-9226861a82bf", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585458d8f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"whisker-585458d8f7-k7hfk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38746d40327", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.710 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.1/32] ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.710 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38746d40327 ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.802 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.803 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" 
Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0", GenerateName:"whisker-585458d8f7-", Namespace:"calico-system", SelfLink:"", UID:"4fba136d-0d29-495a-b43f-9226861a82bf", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585458d8f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc", Pod:"whisker-585458d8f7-k7hfk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38746d40327", MAC:"be:b2:bd:8d:9d:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:00.839856 containerd[1977]: 2026-03-14 00:24:00.829 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc" Namespace="calico-system" Pod="whisker-585458d8f7-k7hfk" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--585458d8f7--k7hfk-eth0" Mar 14 00:24:00.898867 containerd[1977]: 
time="2026-03-14T00:24:00.898438261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:00.898867 containerd[1977]: time="2026-03-14T00:24:00.898518425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:00.898867 containerd[1977]: time="2026-03-14T00:24:00.898540788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:00.898867 containerd[1977]: time="2026-03-14T00:24:00.898674781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:00.941828 systemd[1]: Started cri-containerd-0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc.scope - libcontainer container 0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc. 
Mar 14 00:24:01.016160 containerd[1977]: time="2026-03-14T00:24:01.016035934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585458d8f7-k7hfk,Uid:4fba136d-0d29-495a-b43f-9226861a82bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc\"" Mar 14 00:24:01.020897 containerd[1977]: time="2026-03-14T00:24:01.020478285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:24:01.220842 kubelet[3185]: I0314 00:24:01.220647 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:24:02.035495 systemd-networkd[1794]: cali38746d40327: Gained IPv6LL Mar 14 00:24:02.172956 kernel: calico-node[4684]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:24:03.276464 systemd-networkd[1794]: vxlan.calico: Link UP Mar 14 00:24:03.276477 systemd-networkd[1794]: vxlan.calico: Gained carrier Mar 14 00:24:03.400999 (udev-worker)[4707]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:24:03.408423 (udev-worker)[4894]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:24:03.878047 containerd[1977]: time="2026-03-14T00:24:03.877950374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:24:03.925324 containerd[1977]: time="2026-03-14T00:24:03.925088014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:03.934138 containerd[1977]: time="2026-03-14T00:24:03.934072954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.904743369s" Mar 14 00:24:03.935181 containerd[1977]: time="2026-03-14T00:24:03.934145215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:24:03.981920 containerd[1977]: time="2026-03-14T00:24:03.981870299Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:03.984260 containerd[1977]: time="2026-03-14T00:24:03.984193806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:03.992676 containerd[1977]: time="2026-03-14T00:24:03.992622458Z" level=info msg="CreateContainer within sandbox \"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:24:04.097841 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4167390308.mount: Deactivated successfully. Mar 14 00:24:04.101775 containerd[1977]: time="2026-03-14T00:24:04.101734808Z" level=info msg="CreateContainer within sandbox \"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913\"" Mar 14 00:24:04.102886 containerd[1977]: time="2026-03-14T00:24:04.102441750Z" level=info msg="StartContainer for \"efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913\"" Mar 14 00:24:04.273193 systemd[1]: run-containerd-runc-k8s.io-efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913-runc.p2TCsu.mount: Deactivated successfully. Mar 14 00:24:04.284028 systemd[1]: Started cri-containerd-efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913.scope - libcontainer container efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913. 
Mar 14 00:24:04.331220 containerd[1977]: time="2026-03-14T00:24:04.331170856Z" level=info msg="StartContainer for \"efa512c67a1fc132d093591a3f54431a24e2eaee2728a6c8c8009d14c2d16913\" returns successfully" Mar 14 00:24:04.373833 containerd[1977]: time="2026-03-14T00:24:04.373245574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:24:04.558413 containerd[1977]: time="2026-03-14T00:24:04.558286090Z" level=info msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" Mar 14 00:24:04.560648 containerd[1977]: time="2026-03-14T00:24:04.559438461Z" level=info msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.857 [INFO][5002] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.858 [INFO][5002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" iface="eth0" netns="/var/run/netns/cni-36525aaa-0162-8403-bd9c-829dd4589dad" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.859 [INFO][5002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" iface="eth0" netns="/var/run/netns/cni-36525aaa-0162-8403-bd9c-829dd4589dad" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.859 [INFO][5002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" iface="eth0" netns="/var/run/netns/cni-36525aaa-0162-8403-bd9c-829dd4589dad" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.859 [INFO][5002] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:04.859 [INFO][5002] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.046 [INFO][5016] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.047 [INFO][5016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.047 [INFO][5016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.056 [WARNING][5016] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.056 [INFO][5016] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.062 [INFO][5016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:05.070462 containerd[1977]: 2026-03-14 00:24:05.066 [INFO][5002] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:05.076514 containerd[1977]: time="2026-03-14T00:24:05.076464677Z" level=info msg="TearDown network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" successfully" Mar 14 00:24:05.076643 containerd[1977]: time="2026-03-14T00:24:05.076525481Z" level=info msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" returns successfully" Mar 14 00:24:05.079729 containerd[1977]: time="2026-03-14T00:24:05.079693738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-ntpm7,Uid:29b0b1cc-979c-471a-96de-036f619bea96,Namespace:calico-system,Attempt:1,}" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.881 [INFO][5001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.882 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" iface="eth0" netns="/var/run/netns/cni-747b20c9-cfac-910d-ca04-93450efda5f8" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.882 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" iface="eth0" netns="/var/run/netns/cni-747b20c9-cfac-910d-ca04-93450efda5f8" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.884 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" iface="eth0" netns="/var/run/netns/cni-747b20c9-cfac-910d-ca04-93450efda5f8" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.884 [INFO][5001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:04.884 [INFO][5001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.043 [INFO][5021] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.048 [INFO][5021] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.062 [INFO][5021] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.077 [WARNING][5021] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.077 [INFO][5021] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.081 [INFO][5021] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:05.087160 containerd[1977]: 2026-03-14 00:24:05.084 [INFO][5001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:05.090544 containerd[1977]: time="2026-03-14T00:24:05.087581129Z" level=info msg="TearDown network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" successfully" Mar 14 00:24:05.090544 containerd[1977]: time="2026-03-14T00:24:05.087634630Z" level=info msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" returns successfully" Mar 14 00:24:05.094394 containerd[1977]: time="2026-03-14T00:24:05.094346019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-8rmqt,Uid:9d618a51-6829-430e-b49a-66d79e5d6bd9,Namespace:calico-system,Attempt:1,}" Mar 14 00:24:05.095493 systemd[1]: run-netns-cni\x2d36525aaa\x2d0162\x2d8403\x2dbd9c\x2d829dd4589dad.mount: Deactivated successfully. 
Mar 14 00:24:05.095916 systemd[1]: run-netns-cni\x2d747b20c9\x2dcfac\x2d910d\x2dca04\x2d93450efda5f8.mount: Deactivated successfully. Mar 14 00:24:05.153442 systemd[1]: Started sshd@7-172.31.20.55:22-68.220.241.50:57074.service - OpenSSH per-connection server daemon (68.220.241.50:57074). Mar 14 00:24:05.316904 systemd-networkd[1794]: vxlan.calico: Gained IPv6LL Mar 14 00:24:05.416607 (udev-worker)[4902]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:24:05.418162 systemd-networkd[1794]: calidbe72da6b95: Link UP Mar 14 00:24:05.419337 systemd-networkd[1794]: calidbe72da6b95: Gained carrier Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.254 [INFO][5047] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0 calico-apiserver-84dcd48bcb- calico-system 9d618a51-6829-430e-b49a-66d79e5d6bd9 977 0 2026-03-14 00:23:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84dcd48bcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-55 calico-apiserver-84dcd48bcb-8rmqt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidbe72da6b95 [] [] }} ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.254 [INFO][5047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" 
Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.308 [INFO][5066] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" HandleID="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.337 [INFO][5066] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" HandleID="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"calico-apiserver-84dcd48bcb-8rmqt", "timestamp":"2026-03-14 00:24:05.308689503 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003738c0)} Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.337 [INFO][5066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.338 [INFO][5066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.338 [INFO][5066] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.344 [INFO][5066] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.358 [INFO][5066] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.365 [INFO][5066] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.368 [INFO][5066] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.372 [INFO][5066] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.372 [INFO][5066] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.378 [INFO][5066] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.398 [INFO][5066] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.409 [INFO][5066] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.2/26] block=192.168.67.0/26 
handle="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.409 [INFO][5066] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.2/26] handle="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" host="ip-172-31-20-55" Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.409 [INFO][5066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:05.445913 containerd[1977]: 2026-03-14 00:24:05.409 [INFO][5066] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.2/26] IPv6=[] ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" HandleID="k8s-pod-network.bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.413 [INFO][5047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"9d618a51-6829-430e-b49a-66d79e5d6bd9", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"calico-apiserver-84dcd48bcb-8rmqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidbe72da6b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.413 [INFO][5047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.2/32] ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.413 [INFO][5047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbe72da6b95 ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.420 [INFO][5047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.421 [INFO][5047] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"9d618a51-6829-430e-b49a-66d79e5d6bd9", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a", Pod:"calico-apiserver-84dcd48bcb-8rmqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidbe72da6b95", MAC:"7a:29:d9:23:fe:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:05.447410 containerd[1977]: 2026-03-14 00:24:05.439 [INFO][5047] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-8rmqt" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:05.502390 containerd[1977]: time="2026-03-14T00:24:05.499103391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:05.502390 containerd[1977]: time="2026-03-14T00:24:05.500677998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:05.502390 containerd[1977]: time="2026-03-14T00:24:05.500697763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:05.502390 containerd[1977]: time="2026-03-14T00:24:05.500802914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:05.551933 systemd[1]: Started cri-containerd-bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a.scope - libcontainer container bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a. 
Mar 14 00:24:05.557111 containerd[1977]: time="2026-03-14T00:24:05.556754358Z" level=info msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" Mar 14 00:24:05.566169 containerd[1977]: time="2026-03-14T00:24:05.565083716Z" level=info msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" Mar 14 00:24:05.578016 systemd-networkd[1794]: cali36d1ced89db: Link UP Mar 14 00:24:05.583948 systemd-networkd[1794]: cali36d1ced89db: Gained carrier Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.255 [INFO][5038] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0 calico-apiserver-84dcd48bcb- calico-system 29b0b1cc-979c-471a-96de-036f619bea96 975 0 2026-03-14 00:23:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84dcd48bcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-55 calico-apiserver-84dcd48bcb-ntpm7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali36d1ced89db [] [] }} ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.255 [INFO][5038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.321 [INFO][5064] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" HandleID="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.342 [INFO][5064] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" HandleID="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"calico-apiserver-84dcd48bcb-ntpm7", "timestamp":"2026-03-14 00:24:05.321082747 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e4f20)} Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.342 [INFO][5064] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.409 [INFO][5064] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.410 [INFO][5064] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.445 [INFO][5064] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.463 [INFO][5064] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.490 [INFO][5064] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.496 [INFO][5064] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.502 [INFO][5064] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.502 [INFO][5064] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.510 [INFO][5064] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.521 [INFO][5064] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.542 [INFO][5064] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.3/26] block=192.168.67.0/26 
handle="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.542 [INFO][5064] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.3/26] handle="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" host="ip-172-31-20-55" Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.544 [INFO][5064] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:05.649761 containerd[1977]: 2026-03-14 00:24:05.544 [INFO][5064] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.3/26] IPv6=[] ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" HandleID="k8s-pod-network.4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.558 [INFO][5038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"29b0b1cc-979c-471a-96de-036f619bea96", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"calico-apiserver-84dcd48bcb-ntpm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36d1ced89db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.558 [INFO][5038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.3/32] ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.559 [INFO][5038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36d1ced89db ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.588 [INFO][5038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.589 [INFO][5038] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"29b0b1cc-979c-471a-96de-036f619bea96", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a", Pod:"calico-apiserver-84dcd48bcb-ntpm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36d1ced89db", MAC:"a2:1c:a6:78:5e:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:05.652658 containerd[1977]: 2026-03-14 00:24:05.634 [INFO][5038] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a" Namespace="calico-system" Pod="calico-apiserver-84dcd48bcb-ntpm7" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:05.743584 sshd[5046]: Accepted publickey for core from 68.220.241.50 port 57074 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:05.750314 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:05.766047 systemd-logind[1952]: New session 8 of user core. Mar 14 00:24:05.770006 containerd[1977]: time="2026-03-14T00:24:05.763617682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:05.770006 containerd[1977]: time="2026-03-14T00:24:05.763674345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:05.770006 containerd[1977]: time="2026-03-14T00:24:05.763691463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:05.770006 containerd[1977]: time="2026-03-14T00:24:05.763787726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:05.772579 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 14 00:24:05.838756 containerd[1977]: time="2026-03-14T00:24:05.837715878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-8rmqt,Uid:9d618a51-6829-430e-b49a-66d79e5d6bd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a\"" Mar 14 00:24:05.838188 systemd[1]: Started cri-containerd-4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a.scope - libcontainer container 4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a. Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.810 [INFO][5147] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.810 [INFO][5147] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" iface="eth0" netns="/var/run/netns/cni-eb557d26-bc39-25d7-dcc9-2548dab09e2a" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.812 [INFO][5147] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" iface="eth0" netns="/var/run/netns/cni-eb557d26-bc39-25d7-dcc9-2548dab09e2a" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.813 [INFO][5147] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" iface="eth0" netns="/var/run/netns/cni-eb557d26-bc39-25d7-dcc9-2548dab09e2a" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.813 [INFO][5147] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.813 [INFO][5147] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.884 [INFO][5207] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.885 [INFO][5207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.885 [INFO][5207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.908 [WARNING][5207] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.908 [INFO][5207] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.913 [INFO][5207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:05.922463 containerd[1977]: 2026-03-14 00:24:05.916 [INFO][5147] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:05.925629 containerd[1977]: time="2026-03-14T00:24:05.922632222Z" level=info msg="TearDown network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" successfully" Mar 14 00:24:05.925629 containerd[1977]: time="2026-03-14T00:24:05.922664245Z" level=info msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" returns successfully" Mar 14 00:24:05.928987 containerd[1977]: time="2026-03-14T00:24:05.928749147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n76gs,Uid:599d0c23-2c01-40d7-91a9-4eddcc457e9d,Namespace:kube-system,Attempt:1,}" Mar 14 00:24:06.102829 systemd[1]: run-netns-cni\x2deb557d26\x2dbc39\x2d25d7\x2ddcc9\x2d2548dab09e2a.mount: Deactivated successfully. 
Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.899 [INFO][5152] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.899 [INFO][5152] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" iface="eth0" netns="/var/run/netns/cni-23d3af23-1e2e-96be-2eda-015178df4c55" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.899 [INFO][5152] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" iface="eth0" netns="/var/run/netns/cni-23d3af23-1e2e-96be-2eda-015178df4c55" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.901 [INFO][5152] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" iface="eth0" netns="/var/run/netns/cni-23d3af23-1e2e-96be-2eda-015178df4c55" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.901 [INFO][5152] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:05.901 [INFO][5152] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.005 [INFO][5225] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.006 [INFO][5225] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.006 [INFO][5225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.074 [WARNING][5225] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.074 [INFO][5225] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.078 [INFO][5225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:06.120059 containerd[1977]: 2026-03-14 00:24:06.112 [INFO][5152] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:06.125525 containerd[1977]: time="2026-03-14T00:24:06.124678336Z" level=info msg="TearDown network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" successfully" Mar 14 00:24:06.125525 containerd[1977]: time="2026-03-14T00:24:06.124725092Z" level=info msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" returns successfully" Mar 14 00:24:06.127765 systemd[1]: run-netns-cni\x2d23d3af23\x2d1e2e\x2d96be\x2d2eda\x2d015178df4c55.mount: Deactivated successfully. 
Mar 14 00:24:06.128369 containerd[1977]: time="2026-03-14T00:24:06.128229832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dcd48bcb-ntpm7,Uid:29b0b1cc-979c-471a-96de-036f619bea96,Namespace:calico-system,Attempt:1,} returns sandbox id \"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a\"" Mar 14 00:24:06.135875 containerd[1977]: time="2026-03-14T00:24:06.134924259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5zzm,Uid:8fcc26c0-21bc-4ace-9bc8-3087de8102bb,Namespace:calico-system,Attempt:1,}" Mar 14 00:24:06.453708 systemd-networkd[1794]: calic7a07507789: Link UP Mar 14 00:24:06.459100 systemd-networkd[1794]: calic7a07507789: Gained carrier Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.242 [INFO][5237] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0 coredns-66bc5c9577- kube-system 599d0c23-2c01-40d7-91a9-4eddcc457e9d 1007 0 2026-03-14 00:23:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-55 coredns-66bc5c9577-n76gs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic7a07507789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.242 [INFO][5237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" 
WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.313 [INFO][5272] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" HandleID="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.333 [INFO][5272] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" HandleID="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-55", "pod":"coredns-66bc5c9577-n76gs", "timestamp":"2026-03-14 00:24:06.313182462 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00002a000)} Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.333 [INFO][5272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.333 [INFO][5272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.333 [INFO][5272] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.340 [INFO][5272] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.361 [INFO][5272] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.379 [INFO][5272] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.387 [INFO][5272] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.398 [INFO][5272] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.398 [INFO][5272] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.407 [INFO][5272] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.419 [INFO][5272] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.430 [INFO][5272] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.4/26] block=192.168.67.0/26 
handle="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.430 [INFO][5272] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.4/26] handle="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" host="ip-172-31-20-55" Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.432 [INFO][5272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:06.496653 containerd[1977]: 2026-03-14 00:24:06.432 [INFO][5272] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.4/26] IPv6=[] ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" HandleID="k8s-pod-network.7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.498485 containerd[1977]: 2026-03-14 00:24:06.437 [INFO][5237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"599d0c23-2c01-40d7-91a9-4eddcc457e9d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"coredns-66bc5c9577-n76gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a07507789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:06.498485 containerd[1977]: 2026-03-14 00:24:06.437 [INFO][5237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.4/32] ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.498485 containerd[1977]: 2026-03-14 00:24:06.438 [INFO][5237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7a07507789 ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" 
WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.498485 containerd[1977]: 2026-03-14 00:24:06.461 [INFO][5237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.498485 containerd[1977]: 2026-03-14 00:24:06.461 [INFO][5237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"599d0c23-2c01-40d7-91a9-4eddcc457e9d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e", Pod:"coredns-66bc5c9577-n76gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a07507789", MAC:"7e:df:83:01:6a:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:06.499156 containerd[1977]: 2026-03-14 00:24:06.487 [INFO][5237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e" Namespace="kube-system" Pod="coredns-66bc5c9577-n76gs" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:06.597722 systemd-networkd[1794]: cali724f469e255: Link UP Mar 14 00:24:06.607193 systemd-networkd[1794]: cali724f469e255: Gained carrier Mar 14 00:24:06.666073 containerd[1977]: time="2026-03-14T00:24:06.665238135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:06.666073 containerd[1977]: time="2026-03-14T00:24:06.665315990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:06.666073 containerd[1977]: time="2026-03-14T00:24:06.665336117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:06.670012 containerd[1977]: time="2026-03-14T00:24:06.669949050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.288 [INFO][5262] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0 csi-node-driver- calico-system 8fcc26c0-21bc-4ace-9bc8-3087de8102bb 1013 0 2026-03-14 00:23:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-55 csi-node-driver-q5zzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali724f469e255 [] [] }} ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.288 [INFO][5262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.382 [INFO][5283] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" HandleID="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.413 [INFO][5283] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" HandleID="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"csi-node-driver-q5zzm", "timestamp":"2026-03-14 00:24:06.382446681 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000206580)} Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.413 [INFO][5283] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.431 [INFO][5283] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.431 [INFO][5283] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.439 [INFO][5283] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.468 [INFO][5283] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.485 [INFO][5283] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.491 [INFO][5283] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.497 [INFO][5283] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.497 [INFO][5283] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.502 [INFO][5283] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76 Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.523 [INFO][5283] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.571 [INFO][5283] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.5/26] block=192.168.67.0/26 
handle="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.572 [INFO][5283] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.5/26] handle="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" host="ip-172-31-20-55" Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.572 [INFO][5283] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:06.689584 containerd[1977]: 2026-03-14 00:24:06.572 [INFO][5283] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.5/26] IPv6=[] ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" HandleID="k8s-pod-network.dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.590 [INFO][5262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fcc26c0-21bc-4ace-9bc8-3087de8102bb", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"csi-node-driver-q5zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724f469e255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.591 [INFO][5262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.5/32] ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.591 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724f469e255 ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.605 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.607 [INFO][5262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fcc26c0-21bc-4ace-9bc8-3087de8102bb", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76", Pod:"csi-node-driver-q5zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724f469e255", MAC:"32:a9:b4:3b:b5:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:06.692857 containerd[1977]: 2026-03-14 00:24:06.685 [INFO][5262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76" 
Namespace="calico-system" Pod="csi-node-driver-q5zzm" WorkloadEndpoint="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:06.741063 systemd[1]: Started cri-containerd-7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e.scope - libcontainer container 7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e. Mar 14 00:24:06.825674 containerd[1977]: time="2026-03-14T00:24:06.822632356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:06.825674 containerd[1977]: time="2026-03-14T00:24:06.822705100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:06.825674 containerd[1977]: time="2026-03-14T00:24:06.822726493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:06.825674 containerd[1977]: time="2026-03-14T00:24:06.824086026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:06.874065 systemd[1]: Started cri-containerd-dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76.scope - libcontainer container dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76. 
Mar 14 00:24:06.913612 containerd[1977]: time="2026-03-14T00:24:06.913505941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n76gs,Uid:599d0c23-2c01-40d7-91a9-4eddcc457e9d,Namespace:kube-system,Attempt:1,} returns sandbox id \"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e\"" Mar 14 00:24:06.931174 containerd[1977]: time="2026-03-14T00:24:06.931134721Z" level=info msg="CreateContainer within sandbox \"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:24:06.995631 containerd[1977]: time="2026-03-14T00:24:06.994943986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5zzm,Uid:8fcc26c0-21bc-4ace-9bc8-3087de8102bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76\"" Mar 14 00:24:07.126118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917338358.mount: Deactivated successfully. Mar 14 00:24:07.127186 containerd[1977]: time="2026-03-14T00:24:07.127058825Z" level=info msg="CreateContainer within sandbox \"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0682db2cb16492ca94459c75821dd63c626d9154efee016f53f2ab711cc57f32\"" Mar 14 00:24:07.140748 containerd[1977]: time="2026-03-14T00:24:07.140688963Z" level=info msg="StartContainer for \"0682db2cb16492ca94459c75821dd63c626d9154efee016f53f2ab711cc57f32\"" Mar 14 00:24:07.215373 systemd[1]: Started cri-containerd-0682db2cb16492ca94459c75821dd63c626d9154efee016f53f2ab711cc57f32.scope - libcontainer container 0682db2cb16492ca94459c75821dd63c626d9154efee016f53f2ab711cc57f32. 
Mar 14 00:24:07.288423 containerd[1977]: time="2026-03-14T00:24:07.288310912Z" level=info msg="StartContainer for \"0682db2cb16492ca94459c75821dd63c626d9154efee016f53f2ab711cc57f32\" returns successfully" Mar 14 00:24:07.301908 systemd-networkd[1794]: cali36d1ced89db: Gained IPv6LL Mar 14 00:24:07.365972 systemd-networkd[1794]: calidbe72da6b95: Gained IPv6LL Mar 14 00:24:07.423502 sshd[5046]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:07.425024 kubelet[3185]: I0314 00:24:07.424158 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n76gs" podStartSLOduration=48.42412576 podStartE2EDuration="48.42412576s" podCreationTimestamp="2026-03-14 00:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:07.422421901 +0000 UTC m=+55.058619188" watchObservedRunningTime="2026-03-14 00:24:07.42412576 +0000 UTC m=+55.060323046" Mar 14 00:24:07.433958 systemd[1]: sshd@7-172.31.20.55:22-68.220.241.50:57074.service: Deactivated successfully. Mar 14 00:24:07.444088 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:24:07.448754 systemd-logind[1952]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:24:07.451393 systemd-logind[1952]: Removed session 8. 
Mar 14 00:24:07.556860 containerd[1977]: time="2026-03-14T00:24:07.556573172Z" level=info msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" Mar 14 00:24:07.579665 containerd[1977]: time="2026-03-14T00:24:07.579606516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:07.581234 containerd[1977]: time="2026-03-14T00:24:07.581155805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:24:07.582257 containerd[1977]: time="2026-03-14T00:24:07.582219004Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:07.589890 containerd[1977]: time="2026-03-14T00:24:07.589846876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:07.590694 containerd[1977]: time="2026-03-14T00:24:07.590654117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.217360903s" Mar 14 00:24:07.590694 containerd[1977]: time="2026-03-14T00:24:07.590689187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:24:07.593623 containerd[1977]: time="2026-03-14T00:24:07.593385336Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:24:07.605691 containerd[1977]: time="2026-03-14T00:24:07.605645621Z" level=info msg="CreateContainer within sandbox \"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:24:07.628342 containerd[1977]: time="2026-03-14T00:24:07.628248290Z" level=info msg="CreateContainer within sandbox \"0ea20d76177fbacda9b5c4118471616ae2e43e30b3445e91f085ee3838091cbc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bd3ca37c6c9857e9ccb61150272db8e1573a31eb4647f11090cfd26b9e6e4215\"" Mar 14 00:24:07.634619 containerd[1977]: time="2026-03-14T00:24:07.634043542Z" level=info msg="StartContainer for \"bd3ca37c6c9857e9ccb61150272db8e1573a31eb4647f11090cfd26b9e6e4215\"" Mar 14 00:24:07.688062 systemd[1]: Started cri-containerd-bd3ca37c6c9857e9ccb61150272db8e1573a31eb4647f11090cfd26b9e6e4215.scope - libcontainer container bd3ca37c6c9857e9ccb61150272db8e1573a31eb4647f11090cfd26b9e6e4215. Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.643 [INFO][5471] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.644 [INFO][5471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" iface="eth0" netns="/var/run/netns/cni-02832cc7-5eb1-3253-4bcf-6b6da183be04" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.645 [INFO][5471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" iface="eth0" netns="/var/run/netns/cni-02832cc7-5eb1-3253-4bcf-6b6da183be04" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.646 [INFO][5471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" iface="eth0" netns="/var/run/netns/cni-02832cc7-5eb1-3253-4bcf-6b6da183be04" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.646 [INFO][5471] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.646 [INFO][5471] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.692 [INFO][5482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.692 [INFO][5482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.692 [INFO][5482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.706 [WARNING][5482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.707 [INFO][5482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.709 [INFO][5482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:07.715353 containerd[1977]: 2026-03-14 00:24:07.711 [INFO][5471] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:07.716312 containerd[1977]: time="2026-03-14T00:24:07.715547700Z" level=info msg="TearDown network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" successfully" Mar 14 00:24:07.716312 containerd[1977]: time="2026-03-14T00:24:07.715607544Z" level=info msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" returns successfully" Mar 14 00:24:07.720308 containerd[1977]: time="2026-03-14T00:24:07.719765484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8b9cffb-kn7jv,Uid:5d5616e1-d343-4d2f-a167-fc5c7ebcfeec,Namespace:calico-system,Attempt:1,}" Mar 14 00:24:07.771948 containerd[1977]: time="2026-03-14T00:24:07.771782700Z" level=info msg="StartContainer for \"bd3ca37c6c9857e9ccb61150272db8e1573a31eb4647f11090cfd26b9e6e4215\" returns successfully" Mar 14 00:24:07.887059 systemd-networkd[1794]: calic69eff16bb7: Link UP Mar 14 00:24:07.889452 systemd-networkd[1794]: 
calic69eff16bb7: Gained carrier Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.794 [INFO][5512] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0 calico-kube-controllers-d8b9cffb- calico-system 5d5616e1-d343-4d2f-a167-fc5c7ebcfeec 1038 0 2026-03-14 00:23:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d8b9cffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-55 calico-kube-controllers-d8b9cffb-kn7jv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic69eff16bb7 [] [] }} ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.795 [INFO][5512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.836 [INFO][5535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" HandleID="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.843 [INFO][5535] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" HandleID="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"calico-kube-controllers-d8b9cffb-kn7jv", "timestamp":"2026-03-14 00:24:07.836074625 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e2f20)} Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.843 [INFO][5535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.843 [INFO][5535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.844 [INFO][5535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.846 [INFO][5535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.851 [INFO][5535] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.856 [INFO][5535] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.858 [INFO][5535] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.861 [INFO][5535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.861 [INFO][5535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.863 [INFO][5535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01 Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.868 [INFO][5535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.879 [INFO][5535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.6/26] block=192.168.67.0/26 
handle="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.879 [INFO][5535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.6/26] handle="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" host="ip-172-31-20-55" Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.879 [INFO][5535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:07.924150 containerd[1977]: 2026-03-14 00:24:07.879 [INFO][5535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.6/26] IPv6=[] ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" HandleID="k8s-pod-network.d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.882 [INFO][5512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0", GenerateName:"calico-kube-controllers-d8b9cffb-", Namespace:"calico-system", SelfLink:"", UID:"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8b9cffb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"calico-kube-controllers-d8b9cffb-kn7jv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic69eff16bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.882 [INFO][5512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.6/32] ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.882 [INFO][5512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic69eff16bb7 ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.887 [INFO][5512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" 
WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.887 [INFO][5512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0", GenerateName:"calico-kube-controllers-d8b9cffb-", Namespace:"calico-system", SelfLink:"", UID:"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8b9cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01", Pod:"calico-kube-controllers-d8b9cffb-kn7jv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic69eff16bb7", MAC:"fe:a8:0f:b9:3f:c0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:07.926125 containerd[1977]: 2026-03-14 00:24:07.920 [INFO][5512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01" Namespace="calico-system" Pod="calico-kube-controllers-d8b9cffb-kn7jv" WorkloadEndpoint="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:07.961844 containerd[1977]: time="2026-03-14T00:24:07.960593299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:07.961844 containerd[1977]: time="2026-03-14T00:24:07.960661240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:07.961844 containerd[1977]: time="2026-03-14T00:24:07.960697281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:07.961844 containerd[1977]: time="2026-03-14T00:24:07.960855807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:07.999529 systemd[1]: Started cri-containerd-d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01.scope - libcontainer container d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01. Mar 14 00:24:08.068340 systemd-networkd[1794]: cali724f469e255: Gained IPv6LL Mar 14 00:24:08.098659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407859657.mount: Deactivated successfully. Mar 14 00:24:08.098789 systemd[1]: run-netns-cni\x2d02832cc7\x2d5eb1\x2d3253\x2d4bcf\x2d6b6da183be04.mount: Deactivated successfully. 
Mar 14 00:24:08.124835 containerd[1977]: time="2026-03-14T00:24:08.124614276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8b9cffb-kn7jv,Uid:5d5616e1-d343-4d2f-a167-fc5c7ebcfeec,Namespace:calico-system,Attempt:1,} returns sandbox id \"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01\"" Mar 14 00:24:08.516634 systemd-networkd[1794]: calic7a07507789: Gained IPv6LL Mar 14 00:24:08.558977 containerd[1977]: time="2026-03-14T00:24:08.557949345Z" level=info msg="StopPodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" Mar 14 00:24:08.558977 containerd[1977]: time="2026-03-14T00:24:08.558395362Z" level=info msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" Mar 14 00:24:08.675416 kubelet[3185]: I0314 00:24:08.675344 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-585458d8f7-k7hfk" podStartSLOduration=2.102305056 podStartE2EDuration="8.675319576s" podCreationTimestamp="2026-03-14 00:24:00 +0000 UTC" firstStartedPulling="2026-03-14 00:24:01.019955696 +0000 UTC m=+48.656152964" lastFinishedPulling="2026-03-14 00:24:07.592970201 +0000 UTC m=+55.229167484" observedRunningTime="2026-03-14 00:24:08.421476372 +0000 UTC m=+56.057673658" watchObservedRunningTime="2026-03-14 00:24:08.675319576 +0000 UTC m=+56.311516861" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.672 [INFO][5629] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.673 [INFO][5629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" iface="eth0" netns="/var/run/netns/cni-234e99a4-8b20-37b0-645c-32410a244974" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.673 [INFO][5629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" iface="eth0" netns="/var/run/netns/cni-234e99a4-8b20-37b0-645c-32410a244974" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.673 [INFO][5629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" iface="eth0" netns="/var/run/netns/cni-234e99a4-8b20-37b0-645c-32410a244974" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.673 [INFO][5629] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.673 [INFO][5629] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.728 [INFO][5647] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.729 [INFO][5647] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.729 [INFO][5647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.743 [WARNING][5647] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.743 [INFO][5647] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.745 [INFO][5647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:08.753097 containerd[1977]: 2026-03-14 00:24:08.749 [INFO][5629] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:08.756368 containerd[1977]: time="2026-03-14T00:24:08.756306018Z" level=info msg="TearDown network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" successfully" Mar 14 00:24:08.756368 containerd[1977]: time="2026-03-14T00:24:08.756346741Z" level=info msg="StopPodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" returns successfully" Mar 14 00:24:08.759300 systemd[1]: run-netns-cni\x2d234e99a4\x2d8b20\x2d37b0\x2d645c\x2d32410a244974.mount: Deactivated successfully. 
Mar 14 00:24:08.761738 containerd[1977]: time="2026-03-14T00:24:08.761123935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qz4q6,Uid:0736b4c3-59d4-4880-ac34-375e0fee379d,Namespace:kube-system,Attempt:1,}" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.691 [INFO][5637] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.691 [INFO][5637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" iface="eth0" netns="/var/run/netns/cni-7e677cf2-6a78-baa8-d81e-62247098570c" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.693 [INFO][5637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" iface="eth0" netns="/var/run/netns/cni-7e677cf2-6a78-baa8-d81e-62247098570c" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.694 [INFO][5637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" iface="eth0" netns="/var/run/netns/cni-7e677cf2-6a78-baa8-d81e-62247098570c" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.694 [INFO][5637] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.694 [INFO][5637] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.740 [INFO][5652] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.741 [INFO][5652] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.746 [INFO][5652] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.756 [WARNING][5652] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.757 [INFO][5652] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.763 [INFO][5652] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:08.768583 containerd[1977]: 2026-03-14 00:24:08.765 [INFO][5637] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:08.769961 containerd[1977]: time="2026-03-14T00:24:08.769922676Z" level=info msg="TearDown network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" successfully" Mar 14 00:24:08.770120 containerd[1977]: time="2026-03-14T00:24:08.769960975Z" level=info msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" returns successfully" Mar 14 00:24:08.773593 containerd[1977]: time="2026-03-14T00:24:08.772775474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tmhdn,Uid:f4d4bc3f-388e-42d8-aa7a-f80349da127e,Namespace:calico-system,Attempt:1,}" Mar 14 00:24:08.776702 systemd[1]: run-netns-cni\x2d7e677cf2\x2d6a78\x2dbaa8\x2dd81e\x2d62247098570c.mount: Deactivated successfully. 
Mar 14 00:24:08.992250 systemd-networkd[1794]: califafc1001f62: Link UP Mar 14 00:24:08.997084 systemd-networkd[1794]: califafc1001f62: Gained carrier Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.866 [INFO][5671] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0 goldmane-cccfbd5cf- calico-system f4d4bc3f-388e-42d8-aa7a-f80349da127e 1060 0 2026-03-14 00:23:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-55 goldmane-cccfbd5cf-tmhdn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califafc1001f62 [] [] }} ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.866 [INFO][5671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.913 [INFO][5689] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" HandleID="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.926 [INFO][5689] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" 
HandleID="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-55", "pod":"goldmane-cccfbd5cf-tmhdn", "timestamp":"2026-03-14 00:24:08.913221793 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002834a0)} Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.926 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.926 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.926 [INFO][5689] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.930 [INFO][5689] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.938 [INFO][5689] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.949 [INFO][5689] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.952 [INFO][5689] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.955 [INFO][5689] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.955 [INFO][5689] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.957 [INFO][5689] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888 Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.971 [INFO][5689] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.984 [INFO][5689] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.7/26] block=192.168.67.0/26 handle="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.984 [INFO][5689] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.7/26] handle="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" host="ip-172-31-20-55" Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.984 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:24:09.028770 containerd[1977]: 2026-03-14 00:24:08.984 [INFO][5689] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.7/26] IPv6=[] ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" HandleID="k8s-pod-network.5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:08.987 [INFO][5671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4d4bc3f-388e-42d8-aa7a-f80349da127e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"goldmane-cccfbd5cf-tmhdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"califafc1001f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:08.988 [INFO][5671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.7/32] ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:08.988 [INFO][5671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califafc1001f62 ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:08.993 [INFO][5671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:08.993 [INFO][5671] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4d4bc3f-388e-42d8-aa7a-f80349da127e", ResourceVersion:"1060", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888", Pod:"goldmane-cccfbd5cf-tmhdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califafc1001f62", MAC:"96:a0:96:e8:6d:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:09.030251 containerd[1977]: 2026-03-14 00:24:09.022 [INFO][5671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tmhdn" WorkloadEndpoint="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:09.099384 containerd[1977]: time="2026-03-14T00:24:09.098408687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:09.099384 containerd[1977]: time="2026-03-14T00:24:09.098498369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:09.099384 containerd[1977]: time="2026-03-14T00:24:09.098523036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:09.099384 containerd[1977]: time="2026-03-14T00:24:09.098623188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:09.174090 systemd-networkd[1794]: cali4a56141ec44: Link UP Mar 14 00:24:09.174343 systemd-networkd[1794]: cali4a56141ec44: Gained carrier Mar 14 00:24:09.180034 systemd[1]: Started cri-containerd-5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888.scope - libcontainer container 5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888. Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.848 [INFO][5661] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0 coredns-66bc5c9577- kube-system 0736b4c3-59d4-4880-ac34-375e0fee379d 1059 0 2026-03-14 00:23:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-55 coredns-66bc5c9577-qz4q6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a56141ec44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.848 [INFO][5661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.932 [INFO][5684] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" HandleID="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.946 [INFO][5684] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" HandleID="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-55", "pod":"coredns-66bc5c9577-qz4q6", "timestamp":"2026-03-14 00:24:08.932825889 +0000 UTC"}, Hostname:"ip-172-31-20-55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.947 [INFO][5684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.986 [INFO][5684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:08.987 [INFO][5684] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-55' Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.032 [INFO][5684] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.041 [INFO][5684] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.065 [INFO][5684] ipam/ipam.go 526: Trying affinity for 192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.073 [INFO][5684] ipam/ipam.go 160: Attempting to load block cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.080 [INFO][5684] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.67.0/26 host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.080 [INFO][5684] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.67.0/26 handle="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.088 [INFO][5684] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.102 [INFO][5684] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.67.0/26 handle="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.130 [INFO][5684] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.67.8/26] block=192.168.67.0/26 
handle="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.131 [INFO][5684] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.67.8/26] handle="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" host="ip-172-31-20-55" Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.132 [INFO][5684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:09.242861 containerd[1977]: 2026-03-14 00:24:09.132 [INFO][5684] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.67.8/26] IPv6=[] ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" HandleID="k8s-pod-network.08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.246989 containerd[1977]: 2026-03-14 00:24:09.151 [INFO][5661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0736b4c3-59d4-4880-ac34-375e0fee379d", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"", Pod:"coredns-66bc5c9577-qz4q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a56141ec44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:09.246989 containerd[1977]: 2026-03-14 00:24:09.151 [INFO][5661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.8/32] ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.246989 containerd[1977]: 2026-03-14 00:24:09.151 [INFO][5661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a56141ec44 ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" 
WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.246989 containerd[1977]: 2026-03-14 00:24:09.173 [INFO][5661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.246989 containerd[1977]: 2026-03-14 00:24:09.173 [INFO][5661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0736b4c3-59d4-4880-ac34-375e0fee379d", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b", Pod:"coredns-66bc5c9577-qz4q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a56141ec44", MAC:"f2:c5:ce:66:d5:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:09.247370 containerd[1977]: 2026-03-14 00:24:09.230 [INFO][5661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b" Namespace="kube-system" Pod="coredns-66bc5c9577-qz4q6" WorkloadEndpoint="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:09.322721 containerd[1977]: time="2026-03-14T00:24:09.321919387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tmhdn,Uid:f4d4bc3f-388e-42d8-aa7a-f80349da127e,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888\"" Mar 14 00:24:09.330967 containerd[1977]: time="2026-03-14T00:24:09.330403284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:09.330967 containerd[1977]: time="2026-03-14T00:24:09.330487783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:09.330967 containerd[1977]: time="2026-03-14T00:24:09.330513606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:09.330967 containerd[1977]: time="2026-03-14T00:24:09.330633817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:09.379194 systemd[1]: Started cri-containerd-08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b.scope - libcontainer container 08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b. Mar 14 00:24:09.491388 containerd[1977]: time="2026-03-14T00:24:09.491342557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qz4q6,Uid:0736b4c3-59d4-4880-ac34-375e0fee379d,Namespace:kube-system,Attempt:1,} returns sandbox id \"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b\"" Mar 14 00:24:09.503692 containerd[1977]: time="2026-03-14T00:24:09.502783566Z" level=info msg="CreateContainer within sandbox \"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:24:09.525896 containerd[1977]: time="2026-03-14T00:24:09.525851099Z" level=info msg="CreateContainer within sandbox \"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f7853c2b4bc8cfc832c13c46645b01a3e3d61b69da07aff2ca935b282a81864\"" Mar 14 00:24:09.528044 containerd[1977]: time="2026-03-14T00:24:09.528006140Z" level=info msg="StartContainer for \"9f7853c2b4bc8cfc832c13c46645b01a3e3d61b69da07aff2ca935b282a81864\"" Mar 14 00:24:09.613204 systemd[1]: Started cri-containerd-9f7853c2b4bc8cfc832c13c46645b01a3e3d61b69da07aff2ca935b282a81864.scope - libcontainer container 
9f7853c2b4bc8cfc832c13c46645b01a3e3d61b69da07aff2ca935b282a81864. Mar 14 00:24:09.669487 systemd-networkd[1794]: calic69eff16bb7: Gained IPv6LL Mar 14 00:24:09.687182 containerd[1977]: time="2026-03-14T00:24:09.687122648Z" level=info msg="StartContainer for \"9f7853c2b4bc8cfc832c13c46645b01a3e3d61b69da07aff2ca935b282a81864\" returns successfully" Mar 14 00:24:10.110009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020423055.mount: Deactivated successfully. Mar 14 00:24:10.467172 kubelet[3185]: I0314 00:24:10.467104 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qz4q6" podStartSLOduration=52.467080907 podStartE2EDuration="52.467080907s" podCreationTimestamp="2026-03-14 00:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:10.434116424 +0000 UTC m=+58.070313717" watchObservedRunningTime="2026-03-14 00:24:10.467080907 +0000 UTC m=+58.103278188" Mar 14 00:24:10.819963 systemd-networkd[1794]: califafc1001f62: Gained IPv6LL Mar 14 00:24:10.948893 systemd-networkd[1794]: cali4a56141ec44: Gained IPv6LL Mar 14 00:24:11.265757 containerd[1977]: time="2026-03-14T00:24:11.265699093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.268151 containerd[1977]: time="2026-03-14T00:24:11.268033266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:24:11.271523 containerd[1977]: time="2026-03-14T00:24:11.271051256Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.275425 containerd[1977]: time="2026-03-14T00:24:11.275383594Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.281887 containerd[1977]: time="2026-03-14T00:24:11.281843497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.68841735s" Mar 14 00:24:11.282016 containerd[1977]: time="2026-03-14T00:24:11.281891809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:24:11.283995 containerd[1977]: time="2026-03-14T00:24:11.283836875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:24:11.289022 containerd[1977]: time="2026-03-14T00:24:11.288983429Z" level=info msg="CreateContainer within sandbox \"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:24:11.349322 containerd[1977]: time="2026-03-14T00:24:11.348746521Z" level=info msg="CreateContainer within sandbox \"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1860efe08af3bf01a2502564954a9fb6d907e9d69b9fba48d91107449a92d7fb\"" Mar 14 00:24:11.350991 containerd[1977]: time="2026-03-14T00:24:11.350955305Z" level=info msg="StartContainer for \"1860efe08af3bf01a2502564954a9fb6d907e9d69b9fba48d91107449a92d7fb\"" Mar 14 00:24:11.442204 systemd[1]: Started cri-containerd-1860efe08af3bf01a2502564954a9fb6d907e9d69b9fba48d91107449a92d7fb.scope - libcontainer container 
1860efe08af3bf01a2502564954a9fb6d907e9d69b9fba48d91107449a92d7fb. Mar 14 00:24:11.511315 containerd[1977]: time="2026-03-14T00:24:11.511167112Z" level=info msg="StartContainer for \"1860efe08af3bf01a2502564954a9fb6d907e9d69b9fba48d91107449a92d7fb\" returns successfully" Mar 14 00:24:11.754259 containerd[1977]: time="2026-03-14T00:24:11.754202866Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.756410 containerd[1977]: time="2026-03-14T00:24:11.756354448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:24:11.759372 containerd[1977]: time="2026-03-14T00:24:11.759318788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 475.442787ms" Mar 14 00:24:11.759483 containerd[1977]: time="2026-03-14T00:24:11.759375782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:24:11.762401 containerd[1977]: time="2026-03-14T00:24:11.760658047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:24:11.767308 containerd[1977]: time="2026-03-14T00:24:11.767211746Z" level=info msg="CreateContainer within sandbox \"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:24:11.791962 containerd[1977]: time="2026-03-14T00:24:11.791916143Z" level=info msg="CreateContainer within sandbox \"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"afc55445412bfa3902f5c82ffd4dbb3379b76847b86e6359c82beeb3376167e6\"" Mar 14 00:24:11.793123 containerd[1977]: time="2026-03-14T00:24:11.792778052Z" level=info msg="StartContainer for \"afc55445412bfa3902f5c82ffd4dbb3379b76847b86e6359c82beeb3376167e6\"" Mar 14 00:24:11.847293 systemd[1]: Started cri-containerd-afc55445412bfa3902f5c82ffd4dbb3379b76847b86e6359c82beeb3376167e6.scope - libcontainer container afc55445412bfa3902f5c82ffd4dbb3379b76847b86e6359c82beeb3376167e6. Mar 14 00:24:11.915918 containerd[1977]: time="2026-03-14T00:24:11.915768452Z" level=info msg="StartContainer for \"afc55445412bfa3902f5c82ffd4dbb3379b76847b86e6359c82beeb3376167e6\" returns successfully" Mar 14 00:24:12.348607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737429203.mount: Deactivated successfully. Mar 14 00:24:12.481573 kubelet[3185]: I0314 00:24:12.481048 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84dcd48bcb-8rmqt" podStartSLOduration=37.041235473 podStartE2EDuration="42.48102858s" podCreationTimestamp="2026-03-14 00:23:30 +0000 UTC" firstStartedPulling="2026-03-14 00:24:05.843048875 +0000 UTC m=+53.479246141" lastFinishedPulling="2026-03-14 00:24:11.282841956 +0000 UTC m=+58.919039248" observedRunningTime="2026-03-14 00:24:12.446748001 +0000 UTC m=+60.082945287" watchObservedRunningTime="2026-03-14 00:24:12.48102858 +0000 UTC m=+60.117225869" Mar 14 00:24:12.526240 systemd[1]: Started sshd@8-172.31.20.55:22-68.220.241.50:34570.service - OpenSSH per-connection server daemon (68.220.241.50:34570). 
Mar 14 00:24:12.894417 containerd[1977]: time="2026-03-14T00:24:12.894178224Z" level=info msg="StopPodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" Mar 14 00:24:13.085521 ntpd[1945]: Listen normally on 7 vxlan.calico 192.168.67.0:123 Mar 14 00:24:13.088262 ntpd[1945]: Listen normally on 8 cali38746d40327 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:24:13.088319 ntpd[1945]: Listen normally on 9 vxlan.calico [fe80::64b8:77ff:fe21:34eb%5]:123 Mar 14 00:24:13.088362 ntpd[1945]: Listen normally on 10 calidbe72da6b95 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:24:13.088411 ntpd[1945]: Listen normally on 11 cali36d1ced89db
[fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:24:13.088467 ntpd[1945]: Listen normally on 12 calic7a07507789 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:24:13.088507 ntpd[1945]: Listen normally on 13 cali724f469e255 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:24:13.088544 ntpd[1945]: Listen normally on 14 calic69eff16bb7 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:24:13.088581 ntpd[1945]: Listen normally on 15 califafc1001f62 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 14 00:24:13.088618 ntpd[1945]: Listen normally on 16 cali4a56141ec44 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 14 00:24:13.185871 sshd[5959]: Accepted publickey for core from 68.220.241.50 port 34570 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:13.194190 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:13.214329 systemd-logind[1952]: New session 9 of user core. Mar 14 00:24:13.219636 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.106 [WARNING][5972] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0736b4c3-59d4-4880-ac34-375e0fee379d", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b", Pod:"coredns-66bc5c9577-qz4q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a56141ec44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.113 [INFO][5972] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.114 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" iface="eth0" netns="" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.115 [INFO][5972] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.115 [INFO][5972] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.351 [INFO][5983] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.355 [INFO][5983] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.355 [INFO][5983] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.381 [WARNING][5983] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.382 [INFO][5983] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.390 [INFO][5983] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:13.416152 containerd[1977]: 2026-03-14 00:24:13.405 [INFO][5972] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:13.417969 containerd[1977]: time="2026-03-14T00:24:13.416191861Z" level=info msg="TearDown network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" successfully" Mar 14 00:24:13.417969 containerd[1977]: time="2026-03-14T00:24:13.416268278Z" level=info msg="StopPodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" returns successfully" Mar 14 00:24:13.451015 kubelet[3185]: I0314 00:24:13.449569 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:24:13.451015 kubelet[3185]: I0314 00:24:13.450367 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:24:13.766057 containerd[1977]: time="2026-03-14T00:24:13.765945301Z" level=info msg="RemovePodSandbox for \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" Mar 14 00:24:13.773277 containerd[1977]: time="2026-03-14T00:24:13.773124545Z" level=info msg="Forcibly stopping sandbox 
\"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\"" Mar 14 00:24:13.943182 containerd[1977]: time="2026-03-14T00:24:13.943135771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:13.949990 containerd[1977]: time="2026-03-14T00:24:13.949932032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:24:13.956994 containerd[1977]: time="2026-03-14T00:24:13.956945746Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:13.963228 containerd[1977]: time="2026-03-14T00:24:13.963172308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:13.976424 containerd[1977]: time="2026-03-14T00:24:13.976370569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.21567207s" Mar 14 00:24:13.979905 containerd[1977]: time="2026-03-14T00:24:13.977012778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:24:14.054243 containerd[1977]: time="2026-03-14T00:24:14.054076249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:13.951 [WARNING][6012] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0736b4c3-59d4-4880-ac34-375e0fee379d", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"08dddf4ce66f4ad48a9907754662a97ed9987536fa1bdb72a8bd73a3887a6a3b", Pod:"coredns-66bc5c9577-qz4q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a56141ec44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:13.955 [INFO][6012] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:13.955 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" iface="eth0" netns="" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:13.955 [INFO][6012] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:13.955 [INFO][6012] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.108 [INFO][6019] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.109 [INFO][6019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.109 [INFO][6019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.133 [WARNING][6019] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.133 [INFO][6019] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" HandleID="k8s-pod-network.be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--qz4q6-eth0" Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.138 [INFO][6019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:14.165223 containerd[1977]: 2026-03-14 00:24:14.152 [INFO][6012] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc" Mar 14 00:24:14.165223 containerd[1977]: time="2026-03-14T00:24:14.165110894Z" level=info msg="TearDown network for sandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" successfully" Mar 14 00:24:14.195834 containerd[1977]: time="2026-03-14T00:24:14.195779874Z" level=info msg="CreateContainer within sandbox \"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:24:14.233879 containerd[1977]: time="2026-03-14T00:24:14.233588980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:14.239953 containerd[1977]: time="2026-03-14T00:24:14.238790780Z" level=info msg="CreateContainer within sandbox \"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1\"" Mar 14 00:24:14.247695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829760419.mount: Deactivated successfully. Mar 14 00:24:14.291135 containerd[1977]: time="2026-03-14T00:24:14.291089631Z" level=info msg="StartContainer for \"98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1\"" Mar 14 00:24:14.302826 containerd[1977]: time="2026-03-14T00:24:14.300666286Z" level=info msg="RemovePodSandbox \"be143539b1f7cab0c28f7c78827289bfdf3778943e3e3196a56961f9f0cd7fdc\" returns successfully" Mar 14 00:24:14.493364 systemd[1]: run-containerd-runc-k8s.io-98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1-runc.W7Xc9E.mount: Deactivated successfully. Mar 14 00:24:14.498501 containerd[1977]: time="2026-03-14T00:24:14.494486770Z" level=info msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" Mar 14 00:24:14.511937 systemd[1]: Started cri-containerd-98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1.scope - libcontainer container 98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1. Mar 14 00:24:14.786152 containerd[1977]: time="2026-03-14T00:24:14.785714302Z" level=info msg="StartContainer for \"98d4407a43307d488c9e38439adc864d282e6f1912297a0cd3bb00548fcde8a1\" returns successfully" Mar 14 00:24:14.788678 sshd[5959]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:14.808770 systemd[1]: sshd@8-172.31.20.55:22-68.220.241.50:34570.service: Deactivated successfully. Mar 14 00:24:14.818268 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:24:14.826955 systemd-logind[1952]: Session 9 logged out. 
Waiting for processes to exit. Mar 14 00:24:14.828788 systemd-logind[1952]: Removed session 9. Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.719 [WARNING][6065] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fcc26c0-21bc-4ace-9bc8-3087de8102bb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76", Pod:"csi-node-driver-q5zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724f469e255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.721 
[INFO][6065] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.721 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" iface="eth0" netns="" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.721 [INFO][6065] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.721 [INFO][6065] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.818 [INFO][6079] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.820 [INFO][6079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.820 [INFO][6079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.840 [WARNING][6079] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.840 [INFO][6079] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.844 [INFO][6079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:14.861868 containerd[1977]: 2026-03-14 00:24:14.853 [INFO][6065] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:14.863983 containerd[1977]: time="2026-03-14T00:24:14.863264254Z" level=info msg="TearDown network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" successfully" Mar 14 00:24:14.863983 containerd[1977]: time="2026-03-14T00:24:14.863299292Z" level=info msg="StopPodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" returns successfully" Mar 14 00:24:14.899879 containerd[1977]: time="2026-03-14T00:24:14.898568013Z" level=info msg="RemovePodSandbox for \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" Mar 14 00:24:14.899879 containerd[1977]: time="2026-03-14T00:24:14.899628955Z" level=info msg="Forcibly stopping sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\"" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.053 [WARNING][6109] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fcc26c0-21bc-4ace-9bc8-3087de8102bb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76", Pod:"csi-node-driver-q5zzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724f469e255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.055 [INFO][6109] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.055 [INFO][6109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" iface="eth0" netns="" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.055 [INFO][6109] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.056 [INFO][6109] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.170 [INFO][6118] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.177 [INFO][6118] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.177 [INFO][6118] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.195 [WARNING][6118] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.196 [INFO][6118] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" HandleID="k8s-pod-network.87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Workload="ip--172--31--20--55-k8s-csi--node--driver--q5zzm-eth0" Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.201 [INFO][6118] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:15.211950 containerd[1977]: 2026-03-14 00:24:15.204 [INFO][6109] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879" Mar 14 00:24:15.216973 containerd[1977]: time="2026-03-14T00:24:15.212940518Z" level=info msg="TearDown network for sandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" successfully" Mar 14 00:24:15.243434 containerd[1977]: time="2026-03-14T00:24:15.243389488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:15.244171 containerd[1977]: time="2026-03-14T00:24:15.244131523Z" level=info msg="RemovePodSandbox \"87fe887d620fa219de821c068fd0edc77153c96f520433d7f00663b59b4f5879\" returns successfully" Mar 14 00:24:15.248870 containerd[1977]: time="2026-03-14T00:24:15.248837727Z" level=info msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.320 [WARNING][6133] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.320 [INFO][6133] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.320 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" iface="eth0" netns="" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.320 [INFO][6133] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.321 [INFO][6133] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.356 [INFO][6140] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.356 [INFO][6140] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.356 [INFO][6140] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.364 [WARNING][6140] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.364 [INFO][6140] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.365 [INFO][6140] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:15.370221 containerd[1977]: 2026-03-14 00:24:15.367 [INFO][6133] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.372535 containerd[1977]: time="2026-03-14T00:24:15.370275718Z" level=info msg="TearDown network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" successfully" Mar 14 00:24:15.372535 containerd[1977]: time="2026-03-14T00:24:15.370305225Z" level=info msg="StopPodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" returns successfully" Mar 14 00:24:15.372535 containerd[1977]: time="2026-03-14T00:24:15.370819506Z" level=info msg="RemovePodSandbox for \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" Mar 14 00:24:15.372535 containerd[1977]: time="2026-03-14T00:24:15.370859818Z" level=info msg="Forcibly stopping sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\"" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.444 [WARNING][6154] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" WorkloadEndpoint="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.444 [INFO][6154] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.444 [INFO][6154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" iface="eth0" netns="" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.445 [INFO][6154] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.445 [INFO][6154] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.477 [INFO][6161] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.478 [INFO][6161] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.478 [INFO][6161] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.488 [WARNING][6161] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.488 [INFO][6161] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" HandleID="k8s-pod-network.8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Workload="ip--172--31--20--55-k8s-whisker--645f7ff588--6rtfv-eth0" Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.490 [INFO][6161] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:15.498195 containerd[1977]: 2026-03-14 00:24:15.493 [INFO][6154] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc" Mar 14 00:24:15.498195 containerd[1977]: time="2026-03-14T00:24:15.496572502Z" level=info msg="TearDown network for sandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" successfully" Mar 14 00:24:15.510573 containerd[1977]: time="2026-03-14T00:24:15.509189673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:15.510573 containerd[1977]: time="2026-03-14T00:24:15.509275448Z" level=info msg="RemovePodSandbox \"8d145a9f73a8ce2e3f8655749c419b8b72ef1c67d90d3517608aabec6828accc\" returns successfully" Mar 14 00:24:15.510573 containerd[1977]: time="2026-03-14T00:24:15.509870008Z" level=info msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.571 [WARNING][6175] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"9d618a51-6829-430e-b49a-66d79e5d6bd9", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a", Pod:"calico-apiserver-84dcd48bcb-8rmqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidbe72da6b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.572 [INFO][6175] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.572 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" iface="eth0" netns="" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.572 [INFO][6175] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.572 [INFO][6175] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.626 [INFO][6183] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.627 [INFO][6183] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.627 [INFO][6183] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.635 [WARNING][6183] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.635 [INFO][6183] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.638 [INFO][6183] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:15.643930 containerd[1977]: 2026-03-14 00:24:15.640 [INFO][6175] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.646071 containerd[1977]: time="2026-03-14T00:24:15.644219885Z" level=info msg="TearDown network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" successfully" Mar 14 00:24:15.646071 containerd[1977]: time="2026-03-14T00:24:15.644246595Z" level=info msg="StopPodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" returns successfully" Mar 14 00:24:15.646071 containerd[1977]: time="2026-03-14T00:24:15.646046647Z" level=info msg="RemovePodSandbox for \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" Mar 14 00:24:15.646198 containerd[1977]: time="2026-03-14T00:24:15.646083116Z" level=info msg="Forcibly stopping sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\"" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.755 [WARNING][6198] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"9d618a51-6829-430e-b49a-66d79e5d6bd9", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"bd6c73867838a33a0692e233165a5159e330a4d05a61d4bf4a72faaab0e6ba7a", Pod:"calico-apiserver-84dcd48bcb-8rmqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidbe72da6b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.755 [INFO][6198] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.755 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" iface="eth0" netns="" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.755 [INFO][6198] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.755 [INFO][6198] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.825 [INFO][6205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.826 [INFO][6205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.826 [INFO][6205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.843 [WARNING][6205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.843 [INFO][6205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" HandleID="k8s-pod-network.14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--8rmqt-eth0" Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.845 [INFO][6205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:15.858832 containerd[1977]: 2026-03-14 00:24:15.852 [INFO][6198] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6" Mar 14 00:24:15.858832 containerd[1977]: time="2026-03-14T00:24:15.857645110Z" level=info msg="TearDown network for sandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" successfully" Mar 14 00:24:15.865604 containerd[1977]: time="2026-03-14T00:24:15.865565226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:15.865946 containerd[1977]: time="2026-03-14T00:24:15.865897335Z" level=info msg="RemovePodSandbox \"14788ebb56965d1507ba03971aa801c0c3b72e5a85eff695f4d376adce2ffcf6\" returns successfully" Mar 14 00:24:15.870257 containerd[1977]: time="2026-03-14T00:24:15.868705057Z" level=info msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:15.951 [WARNING][6220] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0", GenerateName:"calico-kube-controllers-d8b9cffb-", Namespace:"calico-system", SelfLink:"", UID:"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8b9cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01", Pod:"calico-kube-controllers-d8b9cffb-kn7jv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic69eff16bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:15.952 [INFO][6220] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:15.952 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" iface="eth0" netns="" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:15.952 [INFO][6220] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:15.952 [INFO][6220] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.005 [INFO][6227] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.006 [INFO][6227] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.006 [INFO][6227] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.020 [WARNING][6227] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.020 [INFO][6227] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.022 [INFO][6227] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:16.032998 containerd[1977]: 2026-03-14 00:24:16.028 [INFO][6220] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.035984 containerd[1977]: time="2026-03-14T00:24:16.033014928Z" level=info msg="TearDown network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" successfully" Mar 14 00:24:16.035984 containerd[1977]: time="2026-03-14T00:24:16.033045875Z" level=info msg="StopPodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" returns successfully" Mar 14 00:24:16.035984 containerd[1977]: time="2026-03-14T00:24:16.033577036Z" level=info msg="RemovePodSandbox for \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" Mar 14 00:24:16.035984 containerd[1977]: time="2026-03-14T00:24:16.033611967Z" level=info msg="Forcibly stopping sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\"" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.268 [WARNING][6241] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0", GenerateName:"calico-kube-controllers-d8b9cffb-", Namespace:"calico-system", SelfLink:"", UID:"5d5616e1-d343-4d2f-a167-fc5c7ebcfeec", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8b9cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01", Pod:"calico-kube-controllers-d8b9cffb-kn7jv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic69eff16bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.270 [INFO][6241] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.270 [INFO][6241] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" iface="eth0" netns="" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.270 [INFO][6241] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.271 [INFO][6241] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.347 [INFO][6249] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.348 [INFO][6249] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.348 [INFO][6249] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.365 [WARNING][6249] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.365 [INFO][6249] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" HandleID="k8s-pod-network.7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Workload="ip--172--31--20--55-k8s-calico--kube--controllers--d8b9cffb--kn7jv-eth0" Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.367 [INFO][6249] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:16.377764 containerd[1977]: 2026-03-14 00:24:16.373 [INFO][6241] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319" Mar 14 00:24:16.381747 containerd[1977]: time="2026-03-14T00:24:16.379595416Z" level=info msg="TearDown network for sandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" successfully" Mar 14 00:24:16.492886 containerd[1977]: time="2026-03-14T00:24:16.492841730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:16.493018 containerd[1977]: time="2026-03-14T00:24:16.492919390Z" level=info msg="RemovePodSandbox \"7a4061c1f752ba0e0f117967e20705b9dda9dc958cd4f360efa7773600284319\" returns successfully" Mar 14 00:24:16.503287 containerd[1977]: time="2026-03-14T00:24:16.502839924Z" level=info msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.666 [WARNING][6269] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"599d0c23-2c01-40d7-91a9-4eddcc457e9d", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e", Pod:"coredns-66bc5c9577-n76gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a07507789", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.666 [INFO][6269] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.666 [INFO][6269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" iface="eth0" netns="" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.666 [INFO][6269] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.666 [INFO][6269] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.776 [INFO][6276] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.776 [INFO][6276] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.777 [INFO][6276] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.788 [WARNING][6276] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.789 [INFO][6276] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.792 [INFO][6276] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:16.802830 containerd[1977]: 2026-03-14 00:24:16.796 [INFO][6269] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:16.805790 containerd[1977]: time="2026-03-14T00:24:16.802875436Z" level=info msg="TearDown network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" successfully" Mar 14 00:24:16.805790 containerd[1977]: time="2026-03-14T00:24:16.802908244Z" level=info msg="StopPodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" returns successfully" Mar 14 00:24:16.847719 containerd[1977]: time="2026-03-14T00:24:16.847671486Z" level=info msg="RemovePodSandbox for \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" Mar 14 00:24:16.847719 containerd[1977]: time="2026-03-14T00:24:16.847709339Z" level=info msg="Forcibly stopping sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\"" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.927 [WARNING][6291] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"599d0c23-2c01-40d7-91a9-4eddcc457e9d", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"7d74ebf0644da7cac0d3f329da894b2df1f10d7bef4e3fbfae3d6f6a025b306e", Pod:"coredns-66bc5c9577-n76gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7a07507789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.927 [INFO][6291] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.927 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" iface="eth0" netns="" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.927 [INFO][6291] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.927 [INFO][6291] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.986 [INFO][6299] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.988 [INFO][6299] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:16.988 [INFO][6299] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:17.005 [WARNING][6299] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:17.006 [INFO][6299] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" HandleID="k8s-pod-network.fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Workload="ip--172--31--20--55-k8s-coredns--66bc5c9577--n76gs-eth0" Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:17.008 [INFO][6299] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.015235 containerd[1977]: 2026-03-14 00:24:17.012 [INFO][6291] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd" Mar 14 00:24:17.016597 containerd[1977]: time="2026-03-14T00:24:17.015403797Z" level=info msg="TearDown network for sandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" successfully" Mar 14 00:24:17.020854 containerd[1977]: time="2026-03-14T00:24:17.020473448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:17.020854 containerd[1977]: time="2026-03-14T00:24:17.020554874Z" level=info msg="RemovePodSandbox \"fa12e6db880b5c9a55b3f51cee8e77229082221bd348914052c04a3f820f55fd\" returns successfully" Mar 14 00:24:17.021236 containerd[1977]: time="2026-03-14T00:24:17.021206769Z" level=info msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.093 [WARNING][6313] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4d4bc3f-388e-42d8-aa7a-f80349da127e", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888", Pod:"goldmane-cccfbd5cf-tmhdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"califafc1001f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.093 [INFO][6313] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.093 [INFO][6313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" iface="eth0" netns="" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.093 [INFO][6313] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.093 [INFO][6313] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.152 [INFO][6320] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.152 [INFO][6320] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.152 [INFO][6320] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.166 [WARNING][6320] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.166 [INFO][6320] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.168 [INFO][6320] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.176882 containerd[1977]: 2026-03-14 00:24:17.173 [INFO][6313] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.178602 containerd[1977]: time="2026-03-14T00:24:17.176931022Z" level=info msg="TearDown network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" successfully" Mar 14 00:24:17.178602 containerd[1977]: time="2026-03-14T00:24:17.176959881Z" level=info msg="StopPodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" returns successfully" Mar 14 00:24:17.186290 containerd[1977]: time="2026-03-14T00:24:17.185691545Z" level=info msg="RemovePodSandbox for \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" Mar 14 00:24:17.186290 containerd[1977]: time="2026-03-14T00:24:17.185735928Z" level=info msg="Forcibly stopping sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\"" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.279 [WARNING][6334] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4d4bc3f-388e-42d8-aa7a-f80349da127e", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888", Pod:"goldmane-cccfbd5cf-tmhdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califafc1001f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.279 [INFO][6334] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.279 [INFO][6334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" iface="eth0" netns="" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.279 [INFO][6334] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.279 [INFO][6334] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.344 [INFO][6341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.344 [INFO][6341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.345 [INFO][6341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.355 [WARNING][6341] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.355 [INFO][6341] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" HandleID="k8s-pod-network.11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Workload="ip--172--31--20--55-k8s-goldmane--cccfbd5cf--tmhdn-eth0" Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.358 [INFO][6341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.364160 containerd[1977]: 2026-03-14 00:24:17.360 [INFO][6334] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae" Mar 14 00:24:17.365614 containerd[1977]: time="2026-03-14T00:24:17.364315480Z" level=info msg="TearDown network for sandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" successfully" Mar 14 00:24:17.393826 containerd[1977]: time="2026-03-14T00:24:17.392344745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:17.393826 containerd[1977]: time="2026-03-14T00:24:17.392533753Z" level=info msg="RemovePodSandbox \"11d2f4900fc0c494ea5ab684a92f77bfcb7d565b92557c3559735d606a829bae\" returns successfully" Mar 14 00:24:17.431302 containerd[1977]: time="2026-03-14T00:24:17.430907760Z" level=info msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.531 [WARNING][6355] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"29b0b1cc-979c-471a-96de-036f619bea96", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a", Pod:"calico-apiserver-84dcd48bcb-ntpm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36d1ced89db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.531 [INFO][6355] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.531 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" iface="eth0" netns="" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.531 [INFO][6355] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.531 [INFO][6355] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.605 [INFO][6363] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.607 [INFO][6363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.608 [INFO][6363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.624 [WARNING][6363] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.624 [INFO][6363] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.627 [INFO][6363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.637305 containerd[1977]: 2026-03-14 00:24:17.631 [INFO][6355] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.637866 containerd[1977]: time="2026-03-14T00:24:17.637839918Z" level=info msg="TearDown network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" successfully" Mar 14 00:24:17.638558 containerd[1977]: time="2026-03-14T00:24:17.637975737Z" level=info msg="StopPodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" returns successfully" Mar 14 00:24:17.651916 containerd[1977]: time="2026-03-14T00:24:17.651877927Z" level=info msg="RemovePodSandbox for \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" Mar 14 00:24:17.652262 containerd[1977]: time="2026-03-14T00:24:17.652227562Z" level=info msg="Forcibly stopping sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\"" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.720 [WARNING][6377] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0", GenerateName:"calico-apiserver-84dcd48bcb-", Namespace:"calico-system", SelfLink:"", UID:"29b0b1cc-979c-471a-96de-036f619bea96", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dcd48bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-55", ContainerID:"4238e08443df9229e1c28c5d2cee4faa1e6458479494887b830840986e434e2a", Pod:"calico-apiserver-84dcd48bcb-ntpm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36d1ced89db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.721 [INFO][6377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.721 [INFO][6377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" iface="eth0" netns="" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.721 [INFO][6377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.721 [INFO][6377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.770 [INFO][6384] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.771 [INFO][6384] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.771 [INFO][6384] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.780 [WARNING][6384] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.780 [INFO][6384] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" HandleID="k8s-pod-network.868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Workload="ip--172--31--20--55-k8s-calico--apiserver--84dcd48bcb--ntpm7-eth0" Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.784 [INFO][6384] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:24:17.793038 containerd[1977]: 2026-03-14 00:24:17.788 [INFO][6377] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13" Mar 14 00:24:17.794691 containerd[1977]: time="2026-03-14T00:24:17.793643045Z" level=info msg="TearDown network for sandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" successfully" Mar 14 00:24:17.807389 containerd[1977]: time="2026-03-14T00:24:17.806964430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:24:17.807574 containerd[1977]: time="2026-03-14T00:24:17.807107171Z" level=info msg="RemovePodSandbox \"868b4c438b82547ac76cabaffe6eb3aba9f7376975cd96c98f8da934dfcc0a13\" returns successfully" Mar 14 00:24:18.417748 containerd[1977]: time="2026-03-14T00:24:18.417099320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:18.418835 containerd[1977]: time="2026-03-14T00:24:18.418698597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:24:18.497644 containerd[1977]: time="2026-03-14T00:24:18.497596007Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:18.500994 containerd[1977]: time="2026-03-14T00:24:18.500915406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:18.502404 containerd[1977]: time="2026-03-14T00:24:18.501803396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.447673103s" Mar 14 00:24:18.511679 containerd[1977]: time="2026-03-14T00:24:18.510982673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:24:18.588096 containerd[1977]: time="2026-03-14T00:24:18.588046482Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:24:18.846652 containerd[1977]: time="2026-03-14T00:24:18.846411067Z" level=info msg="CreateContainer within sandbox \"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:24:18.905311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571473922.mount: Deactivated successfully. Mar 14 00:24:18.911709 containerd[1977]: time="2026-03-14T00:24:18.911663790Z" level=info msg="CreateContainer within sandbox \"d64032e68dd1bf3131d562dcc97ebcf5e211af56be32e46a9acb85b85246ec01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c\"" Mar 14 00:24:18.914026 containerd[1977]: time="2026-03-14T00:24:18.913991453Z" level=info msg="StartContainer for \"4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c\"" Mar 14 00:24:19.365040 systemd[1]: Started cri-containerd-4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c.scope - libcontainer container 4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c. Mar 14 00:24:19.450076 containerd[1977]: time="2026-03-14T00:24:19.448923461Z" level=info msg="StartContainer for \"4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c\" returns successfully" Mar 14 00:24:19.913679 systemd[1]: Started sshd@9-172.31.20.55:22-68.220.241.50:34572.service - OpenSSH per-connection server daemon (68.220.241.50:34572). Mar 14 00:24:20.492497 sshd[6445]: Accepted publickey for core from 68.220.241.50 port 34572 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:20.498556 sshd[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:20.507730 systemd-logind[1952]: New session 10 of user core. 
Mar 14 00:24:20.512049 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:24:20.548831 kubelet[3185]: I0314 00:24:20.539796 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84dcd48bcb-ntpm7" podStartSLOduration=44.932822782 podStartE2EDuration="50.534525599s" podCreationTimestamp="2026-03-14 00:23:30 +0000 UTC" firstStartedPulling="2026-03-14 00:24:06.158707658 +0000 UTC m=+53.794904925" lastFinishedPulling="2026-03-14 00:24:11.760410465 +0000 UTC m=+59.396607742" observedRunningTime="2026-03-14 00:24:12.483470883 +0000 UTC m=+60.119668170" watchObservedRunningTime="2026-03-14 00:24:20.534525599 +0000 UTC m=+68.170722886" Mar 14 00:24:20.553735 kubelet[3185]: I0314 00:24:20.553687 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d8b9cffb-kn7jv" podStartSLOduration=39.141202509 podStartE2EDuration="49.553670242s" podCreationTimestamp="2026-03-14 00:23:31 +0000 UTC" firstStartedPulling="2026-03-14 00:24:08.127382712 +0000 UTC m=+55.763579995" lastFinishedPulling="2026-03-14 00:24:18.539850462 +0000 UTC m=+66.176047728" observedRunningTime="2026-03-14 00:24:20.515871173 +0000 UTC m=+68.152068475" watchObservedRunningTime="2026-03-14 00:24:20.553670242 +0000 UTC m=+68.189867529" Mar 14 00:24:21.610460 systemd[1]: run-containerd-runc-k8s.io-4fee43b9c570dcb4f236d0e74661d35e2035f06971130b9ac0f83cb19dd5625c-runc.Pzcvc9.mount: Deactivated successfully. Mar 14 00:24:22.119660 sshd[6445]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:22.124874 systemd-logind[1952]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:24:22.125639 systemd[1]: sshd@9-172.31.20.55:22-68.220.241.50:34572.service: Deactivated successfully. Mar 14 00:24:22.129229 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:24:22.130322 systemd-logind[1952]: Removed session 10. 
Mar 14 00:24:23.144570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544509222.mount: Deactivated successfully.
Mar 14 00:24:24.123998 containerd[1977]: time="2026-03-14T00:24:24.123934935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:24.125840 containerd[1977]: time="2026-03-14T00:24:24.125749960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 14 00:24:24.161914 containerd[1977]: time="2026-03-14T00:24:24.160893047Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:24.175314 containerd[1977]: time="2026-03-14T00:24:24.175233296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:24.177464 containerd[1977]: time="2026-03-14T00:24:24.177423737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.589331633s"
Mar 14 00:24:24.177720 containerd[1977]: time="2026-03-14T00:24:24.177621587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 14 00:24:24.226455 containerd[1977]: time="2026-03-14T00:24:24.225940036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 14 00:24:24.261829 containerd[1977]: time="2026-03-14T00:24:24.260884931Z" level=info msg="CreateContainer within sandbox \"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 14 00:24:24.308755 containerd[1977]: time="2026-03-14T00:24:24.308578259Z" level=info msg="CreateContainer within sandbox \"5c8374b3c8514f88acb2c2eae1be51f8176bab0951d2c42860c29a9e1494a888\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03\""
Mar 14 00:24:24.311032 containerd[1977]: time="2026-03-14T00:24:24.310975984Z" level=info msg="StartContainer for \"8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03\""
Mar 14 00:24:24.530431 systemd[1]: Started cri-containerd-8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03.scope - libcontainer container 8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03.
Mar 14 00:24:24.614393 containerd[1977]: time="2026-03-14T00:24:24.614293224Z" level=info msg="StartContainer for \"8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03\" returns successfully"
Mar 14 00:24:26.112718 containerd[1977]: time="2026-03-14T00:24:26.112664049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:26.114923 containerd[1977]: time="2026-03-14T00:24:26.114861982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 14 00:24:26.117077 containerd[1977]: time="2026-03-14T00:24:26.117036127Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:26.149869 containerd[1977]: time="2026-03-14T00:24:26.121751933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:26.149869 containerd[1977]: time="2026-03-14T00:24:26.122557047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.896549978s"
Mar 14 00:24:26.149869 containerd[1977]: time="2026-03-14T00:24:26.122596508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 14 00:24:26.149869 containerd[1977]: time="2026-03-14T00:24:26.134729921Z" level=info msg="CreateContainer within sandbox \"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 14 00:24:26.182093 containerd[1977]: time="2026-03-14T00:24:26.182037565Z" level=info msg="CreateContainer within sandbox \"dbbd47fdd6480e6f3d4e5afc8765967a81af8eff254cc56562db0054fe238d76\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d93ed990eed1f2a9250cbbe75d45c261fdc3c889bd6ca60f3650af7c0f446e09\""
Mar 14 00:24:26.184726 containerd[1977]: time="2026-03-14T00:24:26.184426489Z" level=info msg="StartContainer for \"d93ed990eed1f2a9250cbbe75d45c261fdc3c889bd6ca60f3650af7c0f446e09\""
Mar 14 00:24:26.244025 systemd[1]: Started cri-containerd-d93ed990eed1f2a9250cbbe75d45c261fdc3c889bd6ca60f3650af7c0f446e09.scope - libcontainer container d93ed990eed1f2a9250cbbe75d45c261fdc3c889bd6ca60f3650af7c0f446e09.
Mar 14 00:24:26.289055 containerd[1977]: time="2026-03-14T00:24:26.288699831Z" level=info msg="StartContainer for \"d93ed990eed1f2a9250cbbe75d45c261fdc3c889bd6ca60f3650af7c0f446e09\" returns successfully"
Mar 14 00:24:26.593555 kubelet[3185]: I0314 00:24:26.591021 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-tmhdn" podStartSLOduration=41.6771905 podStartE2EDuration="56.558946351s" podCreationTimestamp="2026-03-14 00:23:30 +0000 UTC" firstStartedPulling="2026-03-14 00:24:09.325790548 +0000 UTC m=+56.961987812" lastFinishedPulling="2026-03-14 00:24:24.207546381 +0000 UTC m=+71.843743663" observedRunningTime="2026-03-14 00:24:25.668872599 +0000 UTC m=+73.305069887" watchObservedRunningTime="2026-03-14 00:24:26.558946351 +0000 UTC m=+74.195143637"
Mar 14 00:24:27.009560 kubelet[3185]: I0314 00:24:27.009513 3185 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 14 00:24:27.025376 kubelet[3185]: I0314 00:24:27.025329 3185 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 14 00:24:27.232625 systemd[1]: Started sshd@10-172.31.20.55:22-68.220.241.50:47936.service - OpenSSH per-connection server daemon (68.220.241.50:47936).
Mar 14 00:24:27.537063 systemd[1]: run-containerd-runc-k8s.io-8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03-runc.LMYg46.mount: Deactivated successfully.
Mar 14 00:24:27.843999 sshd[6651]: Accepted publickey for core from 68.220.241.50 port 47936 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:27.849494 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:27.857123 systemd-logind[1952]: New session 11 of user core.
Mar 14 00:24:27.860026 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:24:28.861167 systemd[1]: run-containerd-runc-k8s.io-a6cbaa8499e84e1fce9039781b67802d959c375323ff3340b3adf67af3ba2be2-runc.8eIbPr.mount: Deactivated successfully.
Mar 14 00:24:28.998328 sshd[6651]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:29.004226 systemd[1]: sshd@10-172.31.20.55:22-68.220.241.50:47936.service: Deactivated successfully.
Mar 14 00:24:29.007468 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:24:29.008432 systemd-logind[1952]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:24:29.010326 systemd-logind[1952]: Removed session 11.
Mar 14 00:24:29.098159 systemd[1]: Started sshd@11-172.31.20.55:22-68.220.241.50:47948.service - OpenSSH per-connection server daemon (68.220.241.50:47948).
Mar 14 00:24:29.360108 kubelet[3185]: I0314 00:24:29.358278 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q5zzm" podStartSLOduration=39.272588846 podStartE2EDuration="58.358256773s" podCreationTimestamp="2026-03-14 00:23:31 +0000 UTC" firstStartedPulling="2026-03-14 00:24:07.038306875 +0000 UTC m=+54.674504140" lastFinishedPulling="2026-03-14 00:24:26.1239748 +0000 UTC m=+73.760172067" observedRunningTime="2026-03-14 00:24:26.594234672 +0000 UTC m=+74.230431943" watchObservedRunningTime="2026-03-14 00:24:29.358256773 +0000 UTC m=+76.994454059"
Mar 14 00:24:29.637553 sshd[6707]: Accepted publickey for core from 68.220.241.50 port 47948 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:29.639362 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:29.644300 systemd-logind[1952]: New session 12 of user core.
Mar 14 00:24:29.649014 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:24:30.299435 sshd[6707]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:30.304726 systemd-logind[1952]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:24:30.305892 systemd[1]: sshd@11-172.31.20.55:22-68.220.241.50:47948.service: Deactivated successfully.
Mar 14 00:24:30.308885 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:24:30.309913 systemd-logind[1952]: Removed session 12.
Mar 14 00:24:30.394189 systemd[1]: Started sshd@12-172.31.20.55:22-68.220.241.50:47964.service - OpenSSH per-connection server daemon (68.220.241.50:47964).
Mar 14 00:24:30.944985 sshd[6725]: Accepted publickey for core from 68.220.241.50 port 47964 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:30.946706 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:30.952086 systemd-logind[1952]: New session 13 of user core.
Mar 14 00:24:30.958054 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:24:31.475907 sshd[6725]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:31.489458 systemd-logind[1952]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:24:31.489672 systemd[1]: sshd@12-172.31.20.55:22-68.220.241.50:47964.service: Deactivated successfully.
Mar 14 00:24:31.492849 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:24:31.493912 systemd-logind[1952]: Removed session 13.
Mar 14 00:24:36.568152 systemd[1]: Started sshd@13-172.31.20.55:22-68.220.241.50:50288.service - OpenSSH per-connection server daemon (68.220.241.50:50288).
Mar 14 00:24:37.166019 sshd[6749]: Accepted publickey for core from 68.220.241.50 port 50288 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:37.169405 sshd[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:37.177210 systemd-logind[1952]: New session 14 of user core.
Mar 14 00:24:37.181085 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:24:37.919982 sshd[6749]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:37.932916 systemd[1]: sshd@13-172.31.20.55:22-68.220.241.50:50288.service: Deactivated successfully.
Mar 14 00:24:37.936328 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:24:37.937398 systemd-logind[1952]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:24:37.938608 systemd-logind[1952]: Removed session 14.
Mar 14 00:24:38.012188 systemd[1]: Started sshd@14-172.31.20.55:22-68.220.241.50:50304.service - OpenSSH per-connection server daemon (68.220.241.50:50304).
Mar 14 00:24:38.527495 sshd[6762]: Accepted publickey for core from 68.220.241.50 port 50304 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:38.529189 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:38.536458 systemd-logind[1952]: New session 15 of user core.
Mar 14 00:24:38.539985 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:24:40.158478 kubelet[3185]: I0314 00:24:40.158420 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:24:42.060701 sshd[6762]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:42.069163 systemd-logind[1952]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:24:42.070229 systemd[1]: sshd@14-172.31.20.55:22-68.220.241.50:50304.service: Deactivated successfully.
Mar 14 00:24:42.072946 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:24:42.075175 systemd-logind[1952]: Removed session 15.
Mar 14 00:24:42.159527 systemd[1]: Started sshd@15-172.31.20.55:22-68.220.241.50:55048.service - OpenSSH per-connection server daemon (68.220.241.50:55048).
Mar 14 00:24:42.748827 sshd[6776]: Accepted publickey for core from 68.220.241.50 port 55048 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:42.751370 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:42.757490 systemd-logind[1952]: New session 16 of user core.
Mar 14 00:24:42.761017 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:24:44.011934 sshd[6776]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:44.016796 systemd-logind[1952]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:24:44.018531 systemd[1]: sshd@15-172.31.20.55:22-68.220.241.50:55048.service: Deactivated successfully.
Mar 14 00:24:44.021697 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:24:44.024534 systemd-logind[1952]: Removed session 16.
Mar 14 00:24:44.107533 systemd[1]: Started sshd@16-172.31.20.55:22-68.220.241.50:55058.service - OpenSSH per-connection server daemon (68.220.241.50:55058).
Mar 14 00:24:44.640129 sshd[6814]: Accepted publickey for core from 68.220.241.50 port 55058 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:44.641875 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:44.647722 systemd-logind[1952]: New session 17 of user core.
Mar 14 00:24:44.655059 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:24:45.932315 sshd[6814]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:45.947885 systemd[1]: sshd@16-172.31.20.55:22-68.220.241.50:55058.service: Deactivated successfully.
Mar 14 00:24:45.950646 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:24:45.952437 systemd-logind[1952]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:24:45.954331 systemd-logind[1952]: Removed session 17.
Mar 14 00:24:46.013258 systemd[1]: Started sshd@17-172.31.20.55:22-68.220.241.50:55074.service - OpenSSH per-connection server daemon (68.220.241.50:55074).
Mar 14 00:24:46.583845 sshd[6827]: Accepted publickey for core from 68.220.241.50 port 55074 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:46.596484 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:46.605736 systemd-logind[1952]: New session 18 of user core.
Mar 14 00:24:46.612095 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:24:47.311648 sshd[6827]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:47.320819 systemd[1]: sshd@17-172.31.20.55:22-68.220.241.50:55074.service: Deactivated successfully.
Mar 14 00:24:47.323415 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:24:47.325721 systemd-logind[1952]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:24:47.326929 systemd-logind[1952]: Removed session 18.
Mar 14 00:24:51.786032 kubelet[3185]: I0314 00:24:51.785491 3185 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:24:52.415260 systemd[1]: Started sshd@18-172.31.20.55:22-68.220.241.50:49000.service - OpenSSH per-connection server daemon (68.220.241.50:49000).
Mar 14 00:24:53.020644 sshd[6870]: Accepted publickey for core from 68.220.241.50 port 49000 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:53.023789 sshd[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:53.029918 systemd-logind[1952]: New session 19 of user core.
Mar 14 00:24:53.037034 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:24:53.706787 sshd[6870]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:53.710303 systemd[1]: sshd@18-172.31.20.55:22-68.220.241.50:49000.service: Deactivated successfully.
Mar 14 00:24:53.712685 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:24:53.715247 systemd-logind[1952]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:24:53.716479 systemd-logind[1952]: Removed session 19.
Mar 14 00:24:58.797158 systemd[1]: Started sshd@19-172.31.20.55:22-68.220.241.50:49002.service - OpenSSH per-connection server daemon (68.220.241.50:49002).
Mar 14 00:24:59.419967 sshd[6904]: Accepted publickey for core from 68.220.241.50 port 49002 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:59.423989 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:59.435178 systemd-logind[1952]: New session 20 of user core.
Mar 14 00:24:59.444694 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:25:01.064151 sshd[6904]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:01.073163 systemd[1]: sshd@19-172.31.20.55:22-68.220.241.50:49002.service: Deactivated successfully.
Mar 14 00:25:01.076563 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:25:01.077517 systemd-logind[1952]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:25:01.078672 systemd-logind[1952]: Removed session 20.
Mar 14 00:25:06.165094 systemd[1]: Started sshd@20-172.31.20.55:22-68.220.241.50:40252.service - OpenSSH per-connection server daemon (68.220.241.50:40252).
Mar 14 00:25:06.706082 sshd[6957]: Accepted publickey for core from 68.220.241.50 port 40252 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:25:06.707923 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:06.713297 systemd-logind[1952]: New session 21 of user core.
Mar 14 00:25:06.719208 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:25:07.164521 sshd[6957]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:07.170518 systemd[1]: sshd@20-172.31.20.55:22-68.220.241.50:40252.service: Deactivated successfully.
Mar 14 00:25:07.173669 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:25:07.174527 systemd-logind[1952]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:25:07.176000 systemd-logind[1952]: Removed session 21.
Mar 14 00:25:12.251115 systemd[1]: Started sshd@21-172.31.20.55:22-68.220.241.50:38208.service - OpenSSH per-connection server daemon (68.220.241.50:38208).
Mar 14 00:25:12.815889 sshd[6970]: Accepted publickey for core from 68.220.241.50 port 38208 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:25:12.816800 sshd[6970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:12.821357 systemd-logind[1952]: New session 22 of user core.
Mar 14 00:25:12.828012 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:25:13.350389 sshd[6970]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:13.359341 systemd[1]: sshd@21-172.31.20.55:22-68.220.241.50:38208.service: Deactivated successfully.
Mar 14 00:25:13.362034 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:25:13.364247 systemd-logind[1952]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:25:13.366003 systemd-logind[1952]: Removed session 22.
Mar 14 00:25:15.829855 systemd[1]: run-containerd-runc-k8s.io-8f5f77dd85aab217391977e7c2587390b71dc6052a5b409f72fbc29736a4bb03-runc.hIjioN.mount: Deactivated successfully.
Mar 14 00:25:28.231202 systemd[1]: cri-containerd-d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5.scope: Deactivated successfully.
Mar 14 00:25:28.231935 systemd[1]: cri-containerd-d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5.scope: Consumed 7.044s CPU time.
Mar 14 00:25:28.451209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5-rootfs.mount: Deactivated successfully.
Mar 14 00:25:28.508367 containerd[1977]: time="2026-03-14T00:25:28.487136078Z" level=info msg="shim disconnected" id=d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5 namespace=k8s.io
Mar 14 00:25:28.508367 containerd[1977]: time="2026-03-14T00:25:28.508280453Z" level=warning msg="cleaning up after shim disconnected" id=d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5 namespace=k8s.io
Mar 14 00:25:28.508367 containerd[1977]: time="2026-03-14T00:25:28.508305249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:29.027897 kubelet[3185]: I0314 00:25:29.027848 3185 scope.go:117] "RemoveContainer" containerID="d9c97a4c19da9e11b25d6af565efe4349e655e2e4dea0a0cf506a218a00aced5"
Mar 14 00:25:29.104307 containerd[1977]: time="2026-03-14T00:25:29.104247172Z" level=info msg="CreateContainer within sandbox \"e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 14 00:25:29.204416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825831726.mount: Deactivated successfully.
Mar 14 00:25:29.223581 containerd[1977]: time="2026-03-14T00:25:29.223523780Z" level=info msg="CreateContainer within sandbox \"e88913078d705fd8417d829603d10f1af3997aaea71603770b299f308eaa170d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b60f4e666fa1f3d32d9227ab5c7a9835d1689c8def8c60d9a61d2f645d0246d7\""
Mar 14 00:25:29.228122 containerd[1977]: time="2026-03-14T00:25:29.228082047Z" level=info msg="StartContainer for \"b60f4e666fa1f3d32d9227ab5c7a9835d1689c8def8c60d9a61d2f645d0246d7\""
Mar 14 00:25:29.239314 systemd[1]: cri-containerd-82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61.scope: Deactivated successfully.
Mar 14 00:25:29.239636 systemd[1]: cri-containerd-82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61.scope: Consumed 5.087s CPU time, 17.1M memory peak, 0B memory swap peak.
Mar 14 00:25:29.316419 systemd[1]: Started cri-containerd-b60f4e666fa1f3d32d9227ab5c7a9835d1689c8def8c60d9a61d2f645d0246d7.scope - libcontainer container b60f4e666fa1f3d32d9227ab5c7a9835d1689c8def8c60d9a61d2f645d0246d7.
Mar 14 00:25:29.329202 containerd[1977]: time="2026-03-14T00:25:29.329128966Z" level=info msg="shim disconnected" id=82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61 namespace=k8s.io
Mar 14 00:25:29.329202 containerd[1977]: time="2026-03-14T00:25:29.329195290Z" level=warning msg="cleaning up after shim disconnected" id=82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61 namespace=k8s.io
Mar 14 00:25:29.329202 containerd[1977]: time="2026-03-14T00:25:29.329207391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:29.389525 containerd[1977]: time="2026-03-14T00:25:29.389476496Z" level=info msg="StartContainer for \"b60f4e666fa1f3d32d9227ab5c7a9835d1689c8def8c60d9a61d2f645d0246d7\" returns successfully"
Mar 14 00:25:29.829816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61-rootfs.mount: Deactivated successfully.
Mar 14 00:25:30.018129 kubelet[3185]: I0314 00:25:30.018080 3185 scope.go:117] "RemoveContainer" containerID="82ba651cfcb08ba19fa22ab7d031aa430f79ddeb3afe00bf8600f099975a6c61"
Mar 14 00:25:30.040752 containerd[1977]: time="2026-03-14T00:25:30.040704835Z" level=info msg="CreateContainer within sandbox \"22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:25:30.074619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135286403.mount: Deactivated successfully.
Mar 14 00:25:30.080962 containerd[1977]: time="2026-03-14T00:25:30.080796320Z" level=info msg="CreateContainer within sandbox \"22110bec9a4124714d524aa22e91a51d3180e345e856edf8fae950dfb4bb08a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"566bd117412908daf76b2cac4e3489ddeb6cd4bae99fe77be7282a5782d3bd35\""
Mar 14 00:25:30.081693 containerd[1977]: time="2026-03-14T00:25:30.081647934Z" level=info msg="StartContainer for \"566bd117412908daf76b2cac4e3489ddeb6cd4bae99fe77be7282a5782d3bd35\""
Mar 14 00:25:30.132055 systemd[1]: Started cri-containerd-566bd117412908daf76b2cac4e3489ddeb6cd4bae99fe77be7282a5782d3bd35.scope - libcontainer container 566bd117412908daf76b2cac4e3489ddeb6cd4bae99fe77be7282a5782d3bd35.
Mar 14 00:25:30.194010 containerd[1977]: time="2026-03-14T00:25:30.193955572Z" level=info msg="StartContainer for \"566bd117412908daf76b2cac4e3489ddeb6cd4bae99fe77be7282a5782d3bd35\" returns successfully"
Mar 14 00:25:33.328323 systemd[1]: cri-containerd-f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b.scope: Deactivated successfully.
Mar 14 00:25:33.328623 systemd[1]: cri-containerd-f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b.scope: Consumed 3.003s CPU time, 13.5M memory peak, 0B memory swap peak.
Mar 14 00:25:33.356368 containerd[1977]: time="2026-03-14T00:25:33.356294296Z" level=info msg="shim disconnected" id=f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b namespace=k8s.io
Mar 14 00:25:33.356368 containerd[1977]: time="2026-03-14T00:25:33.356366491Z" level=warning msg="cleaning up after shim disconnected" id=f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b namespace=k8s.io
Mar 14 00:25:33.356895 containerd[1977]: time="2026-03-14T00:25:33.356377204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:33.362783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b-rootfs.mount: Deactivated successfully.
Mar 14 00:25:34.032202 kubelet[3185]: I0314 00:25:34.032168 3185 scope.go:117] "RemoveContainer" containerID="f26aa3c2c2040a05810c24f0f5942ec60471599fd3e0145d7e6348c4024e993b"
Mar 14 00:25:34.034451 containerd[1977]: time="2026-03-14T00:25:34.034411980Z" level=info msg="CreateContainer within sandbox \"def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:25:34.102951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661958917.mount: Deactivated successfully.
Mar 14 00:25:34.108567 containerd[1977]: time="2026-03-14T00:25:34.108517277Z" level=info msg="CreateContainer within sandbox \"def220a0ae3f41da88ba21e2366c3e0bd1f0f035a6b7ef3ec3b7601a270a3039\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a0b8ed7aec63c10dfc50a424a56e0e8976f70e9d74528323bcafccfefcaad9d3\""
Mar 14 00:25:34.109144 containerd[1977]: time="2026-03-14T00:25:34.109117702Z" level=info msg="StartContainer for \"a0b8ed7aec63c10dfc50a424a56e0e8976f70e9d74528323bcafccfefcaad9d3\""
Mar 14 00:25:34.147074 systemd[1]: Started cri-containerd-a0b8ed7aec63c10dfc50a424a56e0e8976f70e9d74528323bcafccfefcaad9d3.scope - libcontainer container a0b8ed7aec63c10dfc50a424a56e0e8976f70e9d74528323bcafccfefcaad9d3.
Mar 14 00:25:34.197830 containerd[1977]: time="2026-03-14T00:25:34.197334163Z" level=info msg="StartContainer for \"a0b8ed7aec63c10dfc50a424a56e0e8976f70e9d74528323bcafccfefcaad9d3\" returns successfully"
Mar 14 00:25:34.614504 kubelet[3185]: E0314 00:25:34.614436 3185 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"