Sep 12 17:34:42.205190 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:34:42.205229 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:42.205248 kernel: BIOS-provided physical RAM map:
Sep 12 17:34:42.205261 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:34:42.205272 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:34:42.205284 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 12 17:34:42.205298 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 12 17:34:42.205311 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:34:42.205323 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:34:42.205339 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:34:42.205351 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:34:42.205364 kernel: NX (Execute Disable) protection: active
Sep 12 17:34:42.205376 kernel: APIC: Static calls initialized
Sep 12 17:34:42.205389 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:34:42.205405 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:34:42.205422 kernel: SMBIOS 2.7 present.
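The kernel command line logged above is re-read later in boot (dracut echoes it verbatim, and Ignition keys off flatcar.first_boot). As an illustrative aside, a minimal Python sketch of splitting such a line into bare flags and key=value options; the parse_cmdline helper is hypothetical, not part of Flatcar or the kernel:

    import shlex

    def parse_cmdline(cmdline: str):
        """Split a kernel command line into bare flags and key=value options."""
        flags, options = [], {}
        for token in shlex.split(cmdline):  # honors quoting, splits on whitespace
            if "=" in token:
                key, _, value = token.partition("=")
                options[key] = value  # later duplicates win; the kernel keeps all
            else:
                flags.append(token)
        return flags, options

    # Values taken from the log above:
    flags, options = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT rootflags=rw "
        "console=ttyS0,115200n8 flatcar.first_boot=detected"
    )
    print(options["root"])     # LABEL=ROOT
    print(options["console"])  # ttyS0,115200n8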
Sep 12 17:34:42.205436 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:34:42.205450 kernel: Hypervisor detected: KVM
Sep 12 17:34:42.205463 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:34:42.205477 kernel: kvm-clock: using sched offset of 3788934834 cycles
Sep 12 17:34:42.205492 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:34:42.205506 kernel: tsc: Detected 2500.006 MHz processor
Sep 12 17:34:42.205521 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:34:42.205535 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:34:42.205549 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:34:42.205567 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:34:42.205581 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:34:42.205595 kernel: Using GB pages for direct mapping
Sep 12 17:34:42.205609 kernel: Secure boot disabled
Sep 12 17:34:42.205622 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:34:42.205636 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:34:42.205651 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:34:42.205665 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:34:42.205679 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:34:42.205696 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:34:42.205710 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:34:42.205724 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:34:42.205738 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:34:42.205752 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:34:42.205767 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:34:42.205786 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:34:42.205846 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:34:42.205862 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:34:42.205877 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:34:42.205890 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:34:42.205905 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:34:42.205921 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:34:42.205937 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:34:42.205957 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:34:42.205973 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:34:42.205990 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:34:42.206006 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:34:42.206020 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:34:42.206036 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:34:42.206051 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:34:42.206066 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:34:42.206082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:34:42.206101 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:34:42.206116 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 12 17:34:42.206131 kernel: Zone ranges:
Sep 12 17:34:42.206146 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:34:42.206162 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:34:42.206178 kernel: Normal empty
Sep 12 17:34:42.206193 kernel: Movable zone start for each node
Sep 12 17:34:42.206209 kernel: Early memory node ranges
Sep 12 17:34:42.206225 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:34:42.206243 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:34:42.206258 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:34:42.206274 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:34:42.206290 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:34:42.206305 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:34:42.206321 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:34:42.206337 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:34:42.206352 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:34:42.206368 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:34:42.206386 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:34:42.206402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:34:42.206418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:34:42.206433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:34:42.206449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:34:42.206465 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:34:42.206481 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:34:42.206496 kernel: TSC deadline timer available
Sep 12 17:34:42.206512 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:34:42.206527 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:34:42.206546 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:34:42.206562 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:34:42.206578 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:34:42.206593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:34:42.206609 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:34:42.206626 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:34:42.206641 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:34:42.206656 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:34:42.206672 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:34:42.206692 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:42.206709 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:34:42.206724 kernel: random: crng init done
Sep 12 17:34:42.206740 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:34:42.206755 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:34:42.206771 kernel: Fallback order for Node 0: 0
Sep 12 17:34:42.206786 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 12 17:34:42.206802 kernel: Policy zone: DMA32
Sep 12 17:34:42.211655 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:34:42.211670 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162936K reserved, 0K cma-reserved)
Sep 12 17:34:42.211685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:34:42.211700 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:34:42.211715 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:34:42.211727 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:34:42.211744 kernel: Dynamic Preempt: voluntary
Sep 12 17:34:42.211757 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:34:42.211778 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:34:42.211798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:34:42.211831 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:34:42.211847 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:34:42.211862 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:34:42.211877 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:34:42.211893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:34:42.211909 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:34:42.211941 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:34:42.211957 kernel: Console: colour dummy device 80x25
Sep 12 17:34:42.211974 kernel: printk: console [tty0] enabled
Sep 12 17:34:42.211990 kernel: printk: console [ttyS0] enabled
Sep 12 17:34:42.212007 kernel: ACPI: Core revision 20230628
Sep 12 17:34:42.212026 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:34:42.212041 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:34:42.212057 kernel: x2apic enabled
Sep 12 17:34:42.212073 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:34:42.212090 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Sep 12 17:34:42.212109 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Sep 12 17:34:42.212125 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:34:42.212140 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:34:42.212156 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:34:42.212171 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:34:42.212187 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:34:42.212202 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:34:42.212218 kernel: RETBleed: Vulnerable
Sep 12 17:34:42.212233 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:34:42.212252 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:34:42.212267 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:34:42.212282 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:34:42.212297 kernel: active return thunk: its_return_thunk
Sep 12 17:34:42.212312 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:34:42.212328 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:34:42.212343 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:34:42.212358 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:34:42.212374 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:34:42.212389 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:34:42.212404 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:34:42.212423 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:34:42.212437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:34:42.212453 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:34:42.212468 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:34:42.212483 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:34:42.212499 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:34:42.212514 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:34:42.212529 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:34:42.212545 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:34:42.212560 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:34:42.212575 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:34:42.212590 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:34:42.212608 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:34:42.212623 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:34:42.212639 kernel: landlock: Up and running.
Sep 12 17:34:42.212653 kernel: SELinux: Initializing.
Sep 12 17:34:42.212669 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:34:42.212685 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:34:42.212700 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:34:42.212716 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
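The BogoMIPS figure above is pure arithmetic, not a benchmark: with the logged lpj=2500006 and the usual CONFIG_HZ=1000, lpj * HZ / 500000 gives 5000.01. The mitigation lines (Spectre, RETBleed, MDS, MMIO Stale Data) are also exported through sysfs; a minimal sketch, assuming Python 3 on a live Linux guest, that lists the same per-vulnerability status:

    import pathlib

    # Each file holds one line, e.g. "Mitigation: Retpolines" or
    # "Vulnerable: Clear CPU buffers attempted, no microcode".
    vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")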
Sep 12 17:34:42.212731 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:34:42.212747 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:34:42.212766 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:34:42.212781 kernel: signal: max sigframe size: 3632
Sep 12 17:34:42.212797 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:34:42.212824 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:34:42.212839 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:34:42.212854 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:34:42.212867 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:34:42.212880 kernel: .... node #0, CPUs: #1
Sep 12 17:34:42.212896 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:34:42.212921 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:34:42.212940 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:34:42.212960 kernel: smpboot: Max logical packages: 1
Sep 12 17:34:42.212980 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Sep 12 17:34:42.212999 kernel: devtmpfs: initialized
Sep 12 17:34:42.213018 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:34:42.213037 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:34:42.213057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:34:42.213076 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:34:42.213099 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:34:42.213119 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:34:42.213139 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:34:42.213159 kernel: audit: type=2000 audit(1757698481.419:1): state=initialized audit_enabled=0 res=1
Sep 12 17:34:42.213178 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:34:42.213198 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:34:42.213218 kernel: cpuidle: using governor menu
Sep 12 17:34:42.213238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:34:42.213259 kernel: dca service started, version 1.12.1
Sep 12 17:34:42.213282 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:34:42.213302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:34:42.213321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:34:42.213340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:34:42.213358 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:34:42.213376 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:34:42.213391 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:34:42.213404 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:34:42.213417 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:34:42.213435 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:34:42.213451 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:34:42.213465 kernel: ACPI: Interpreter enabled
Sep 12 17:34:42.213480 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:34:42.213495 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:34:42.213509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:34:42.213524 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:34:42.213540 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:34:42.213555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:34:42.213857 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:34:42.214089 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:34:42.214288 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:34:42.214308 kernel: acpiphp: Slot [3] registered
Sep 12 17:34:42.214326 kernel: acpiphp: Slot [4] registered
Sep 12 17:34:42.214343 kernel: acpiphp: Slot [5] registered
Sep 12 17:34:42.214360 kernel: acpiphp: Slot [6] registered
Sep 12 17:34:42.214384 kernel: acpiphp: Slot [7] registered
Sep 12 17:34:42.214400 kernel: acpiphp: Slot [8] registered
Sep 12 17:34:42.214416 kernel: acpiphp: Slot [9] registered
Sep 12 17:34:42.214434 kernel: acpiphp: Slot [10] registered
Sep 12 17:34:42.214451 kernel: acpiphp: Slot [11] registered
Sep 12 17:34:42.214468 kernel: acpiphp: Slot [12] registered
Sep 12 17:34:42.214486 kernel: acpiphp: Slot [13] registered
Sep 12 17:34:42.214503 kernel: acpiphp: Slot [14] registered
Sep 12 17:34:42.214520 kernel: acpiphp: Slot [15] registered
Sep 12 17:34:42.214542 kernel: acpiphp: Slot [16] registered
Sep 12 17:34:42.214559 kernel: acpiphp: Slot [17] registered
Sep 12 17:34:42.214577 kernel: acpiphp: Slot [18] registered
Sep 12 17:34:42.214593 kernel: acpiphp: Slot [19] registered
Sep 12 17:34:42.214609 kernel: acpiphp: Slot [20] registered
Sep 12 17:34:42.214625 kernel: acpiphp: Slot [21] registered
Sep 12 17:34:42.214643 kernel: acpiphp: Slot [22] registered
Sep 12 17:34:42.214659 kernel: acpiphp: Slot [23] registered
Sep 12 17:34:42.214677 kernel: acpiphp: Slot [24] registered
Sep 12 17:34:42.214693 kernel: acpiphp: Slot [25] registered
Sep 12 17:34:42.214714 kernel: acpiphp: Slot [26] registered
Sep 12 17:34:42.214731 kernel: acpiphp: Slot [27] registered
Sep 12 17:34:42.214748 kernel: acpiphp: Slot [28] registered
Sep 12 17:34:42.214764 kernel: acpiphp: Slot [29] registered
Sep 12 17:34:42.214781 kernel: acpiphp: Slot [30] registered
Sep 12 17:34:42.214797 kernel: acpiphp: Slot [31] registered
Sep 12 17:34:42.216857 kernel: PCI host bridge to bus 0000:00
Sep 12 17:34:42.217057 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:34:42.217189 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:34:42.217322 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:34:42.217447 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:34:42.217570 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:34:42.217695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:34:42.217876 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:34:42.218029 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:34:42.218185 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 12 17:34:42.218325 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:34:42.218465 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:34:42.218606 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:34:42.218746 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:34:42.219601 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:34:42.219760 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:34:42.219929 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:34:42.220082 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 12 17:34:42.220225 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 12 17:34:42.220366 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:34:42.220505 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 12 17:34:42.220647 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:34:42.220803 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:34:42.222044 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 12 17:34:42.222198 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:34:42.222340 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 12 17:34:42.222362 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:34:42.222380 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:34:42.222396 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:34:42.222414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:34:42.222434 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:34:42.222450 kernel: iommu: Default domain type: Translated
Sep 12 17:34:42.222467 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:34:42.222484 kernel: efivars: Registered efivars operations
Sep 12 17:34:42.222501 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:34:42.222517 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:34:42.222534 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:34:42.222551 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:34:42.222692 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:34:42.225698 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:34:42.225927 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:34:42.225954 kernel: vgaarb: loaded
Sep 12 17:34:42.225972 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:34:42.225989 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:34:42.226005 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:34:42.226022 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:34:42.226040 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:34:42.226063 kernel: pnp: PnP ACPI init
Sep 12 17:34:42.226080 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:34:42.226097 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:34:42.226114 kernel: NET: Registered PF_INET protocol family
Sep 12 17:34:42.226131 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:34:42.226148 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:34:42.226165 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:34:42.226182 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:34:42.226199 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:34:42.226219 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:34:42.226236 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:34:42.226253 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:34:42.226270 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:34:42.226287 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:34:42.226426 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:34:42.226554 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:34:42.226680 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:34:42.226850 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:34:42.226989 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:34:42.227149 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:34:42.227173 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:34:42.227191 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:34:42.227208 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Sep 12 17:34:42.227225 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:34:42.227241 kernel: Initialise system trusted keyrings
Sep 12 17:34:42.227258 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:34:42.227282 kernel: Key type asymmetric registered
Sep 12 17:34:42.227299 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:34:42.227512 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:34:42.227530 kernel: io scheduler mq-deadline registered
Sep 12 17:34:42.227546 kernel: io scheduler kyber registered
Sep 12 17:34:42.227563 kernel: io scheduler bfq registered
Sep 12 17:34:42.227580 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:34:42.227596 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:34:42.227612 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:34:42.227634 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:34:42.227650 kernel: i8042: Warning: Keylock active
Sep 12 17:34:42.227666 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:34:42.227683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:34:42.228621 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:34:42.228793 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:34:42.229997 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:34:41 UTC (1757698481)
Sep 12 17:34:42.230137 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:34:42.230165 kernel: intel_pstate: CPU model not supported
Sep 12 17:34:42.230183 kernel: efifb: probing for efifb
Sep 12 17:34:42.230199 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 12 17:34:42.230217 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:34:42.230233 kernel: efifb: scrolling: redraw
Sep 12 17:34:42.230250 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:34:42.230267 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:34:42.230284 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:34:42.230301 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:34:42.230322 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:34:42.230339 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:34:42.230356 kernel: Segment Routing with IPv6
Sep 12 17:34:42.230373 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:34:42.230390 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:34:42.230406 kernel: Key type dns_resolver registered
Sep 12 17:34:42.230447 kernel: IPI shorthand broadcast: enabled
Sep 12 17:34:42.230471 kernel: sched_clock: Marking stable (515255573, 141078812)->(751949873, -95615488)
Sep 12 17:34:42.230489 kernel: registered taskstats version 1
Sep 12 17:34:42.230510 kernel: Loading compiled-in X.509 certificates
Sep 12 17:34:42.230528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:34:42.230545 kernel: Key type .fscrypt registered
Sep 12 17:34:42.230563 kernel: Key type fscrypt-provisioning registered
Sep 12 17:34:42.230580 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:34:42.230598 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:34:42.230616 kernel: ima: No architecture policies found
Sep 12 17:34:42.230633 kernel: clk: Disabling unused clocks
Sep 12 17:34:42.230654 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:34:42.230672 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:34:42.230690 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:34:42.230708 kernel: Run /init as init process
Sep 12 17:34:42.230726 kernel: with arguments:
Sep 12 17:34:42.230743 kernel: /init
Sep 12 17:34:42.230760 kernel: with environment:
Sep 12 17:34:42.230777 kernel: HOME=/
Sep 12 17:34:42.230794 kernel: TERM=linux
Sep 12 17:34:42.231898 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:34:42.231932 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:34:42.231954 systemd[1]: Detected virtualization amazon.
Sep 12 17:34:42.231972 systemd[1]: Detected architecture x86-64.
Sep 12 17:34:42.231989 systemd[1]: Running in initrd.
Sep 12 17:34:42.232005 systemd[1]: No hostname configured, using default hostname.
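Two clocksource switches appear above: first to kvm-clock once it is registered, then to tsc after late TSC calibration completes. The same state is visible from userspace; a small sketch (Python 3, standard sysfs paths):

    import pathlib

    cs = pathlib.Path("/sys/devices/system/clocksource/clocksource0")
    print("available:", (cs / "available_clocksource").read_text().split())
    print("current:", (cs / "current_clocksource").read_text().strip())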
Sep 12 17:34:42.232021 systemd[1]: Hostname set to <localhost>.
Sep 12 17:34:42.232042 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:34:42.232058 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:34:42.232074 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:34:42.232090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:42.232108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:42.232128 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:34:42.232145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:34:42.232165 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:34:42.232183 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:34:42.232203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:34:42.232221 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:34:42.232237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:42.232259 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:42.232275 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:34:42.232292 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:34:42.232309 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:34:42.232326 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:34:42.232342 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:34:42.232358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:34:42.232376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:34:42.232393 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:34:42.232414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:42.232430 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:42.232447 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:42.232463 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:34:42.232479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:34:42.232497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:34:42.232514 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:34:42.232531 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:34:42.232552 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:34:42.232569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:34:42.232587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:42.232604 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:34:42.232620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:42.232638 systemd[1]: Finished systemd-fsck-usr.service.
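The \x2d sequences in the device unit names above are systemd's path escaping: "/" separators become "-", and a literal "-" inside a path element becomes \x2d, so /dev/disk/by-label/EFI-SYSTEM maps to dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A sketch of that encoding, equivalent in spirit to systemd-escape --path (this helper is illustrative and ignores some rarely-hit corner cases):

    def systemd_escape_path(path: str) -> str:
        """Escape a path roughly the way systemd derives .device unit names."""
        parts = []
        for part in path.strip("/").split("/"):
            esc = []
            for i, ch in enumerate(part):
                # Alphanumerics and "_" pass through; "." only when not leading.
                if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                    esc.append(ch)
                else:
                    esc.append("".join(f"\\x{b:02x}" for b in ch.encode()))
            parts.append("".join(esc))
        return "-".join(parts)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device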
Sep 12 17:34:42.232697 systemd-journald[178]: Collecting audit messages is disabled.
Sep 12 17:34:42.232735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:34:42.232755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:42.232775 systemd-journald[178]: Journal started
Sep 12 17:34:42.232823 systemd-journald[178]: Runtime Journal (/run/log/journal/ec22b4d3ed23b255c903bb25a4dce537) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:34:42.208214 systemd-modules-load[179]: Inserted module 'overlay'
Sep 12 17:34:42.242853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:42.246835 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:34:42.249845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:34:42.259887 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:34:42.263585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:34:42.266070 kernel: Bridge firewalling registered
Sep 12 17:34:42.264944 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 12 17:34:42.274635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:34:42.275640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:42.282982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:34:42.287222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:42.292650 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:42.297019 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:34:42.303885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:42.313392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:42.322182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:42.331917 dracut-cmdline[208]: dracut-dracut-053
Sep 12 17:34:42.336784 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:42.367927 systemd-resolved[213]: Positive Trust Anchors:
Sep 12 17:34:42.367946 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:34:42.368011 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:34:42.375974 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 12 17:34:42.378959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:42.380784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:42.426850 kernel: SCSI subsystem initialized
Sep 12 17:34:42.436833 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:34:42.448841 kernel: iscsi: registered transport (tcp)
Sep 12 17:34:42.470969 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:34:42.471052 kernel: QLogic iSCSI HBA Driver
Sep 12 17:34:42.512600 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:34:42.517015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:34:42.555483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:34:42.555558 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:34:42.555956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:34:42.599868 kernel: raid6: avx512x4 gen() 18073 MB/s
Sep 12 17:34:42.617866 kernel: raid6: avx512x2 gen() 18005 MB/s
Sep 12 17:34:42.635865 kernel: raid6: avx512x1 gen() 17210 MB/s
Sep 12 17:34:42.653866 kernel: raid6: avx2x4 gen() 17789 MB/s
Sep 12 17:34:42.671873 kernel: raid6: avx2x2 gen() 17343 MB/s
Sep 12 17:34:42.690089 kernel: raid6: avx2x1 gen() 13426 MB/s
Sep 12 17:34:42.690161 kernel: raid6: using algorithm avx512x4 gen() 18073 MB/s
Sep 12 17:34:42.709067 kernel: raid6: .... xor() 7818 MB/s, rmw enabled
Sep 12 17:34:42.709142 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:34:42.730850 kernel: xor: automatically using best checksumming function avx
Sep 12 17:34:42.891845 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:34:42.903063 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:34:42.914214 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:42.928845 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Sep 12 17:34:42.935109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:42.942013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:34:42.966777 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Sep 12 17:34:42.998203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:34:43.004046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:34:43.062442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:43.073075 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:34:43.101992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:34:43.104477 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:34:43.107341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:43.108498 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:34:43.118424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:34:43.144497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:34:43.179677 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:34:43.179997 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:34:43.186729 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:34:43.190867 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 12 17:34:43.199869 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:d8:4f:f9:b2:c9
Sep 12 17:34:43.213970 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:34:43.214042 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:34:43.214756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:34:43.215093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:43.217156 (udev-worker)[439]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:34:43.217794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:43.232253 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:34:43.232518 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:34:43.218381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:34:43.218679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:43.220637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:43.237227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:43.247550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:34:43.251102 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:34:43.247682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:43.258215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:34:43.258287 kernel: GPT:9289727 != 16777215
Sep 12 17:34:43.261460 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:34:43.261529 kernel: GPT:9289727 != 16777215
Sep 12 17:34:43.261549 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:34:43.261567 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:34:43.264947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:43.283111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:43.289097 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:43.313500 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
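The GPT complaints above are the classic signature of a disk image written for a smaller disk and then attached to a larger volume: the backup GPT header sits at the image's original last sector (LBA 9289727) instead of the volume's actual last sector (LBA 16777215, i.e. an 8 GiB EBS volume at 512-byte sectors). The disk-uuid service rewrites the headers a moment later. The arithmetic, with values taken from the log:

    SECTOR = 512
    alt_header_lba = 9_289_727   # where the image's backup GPT header is
    last_lba = 16_777_215        # last addressable sector of the volume

    print((alt_header_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB image
    print((last_lba + 1) * SECTOR / 2**30)        # 8.0 GiB volume
    # The backup header belongs on the last LBA, hence "9289727 != 16777215";
    # rewriting the GPT moves it to LBA 16777215.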
Sep 12 17:34:43.345831 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Sep 12 17:34:43.382895 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (438)
Sep 12 17:34:43.408233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:34:43.445150 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:34:43.454130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:34:43.465098 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:34:43.465780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:34:43.473043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:34:43.480454 disk-uuid[625]: Primary Header is updated.
Sep 12 17:34:43.480454 disk-uuid[625]: Secondary Entries is updated.
Sep 12 17:34:43.480454 disk-uuid[625]: Secondary Header is updated.
Sep 12 17:34:43.486873 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:34:43.492885 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:34:43.501839 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:34:44.510850 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:34:44.517594 disk-uuid[626]: The operation has completed successfully.
Sep 12 17:34:44.664078 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:34:44.664207 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:34:44.691144 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:34:44.710585 sh[967]: Success
Sep 12 17:34:44.733837 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:34:44.832800 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:34:44.841069 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:34:44.842320 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:34:44.872979 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:34:44.873042 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:44.876120 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:34:44.876187 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:34:44.877421 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:34:44.995845 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:34:45.019741 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:34:45.020791 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:34:45.026041 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:34:45.029991 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:34:45.053882 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:45.053956 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:45.057031 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:34:45.063903 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:34:45.077562 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:34:45.080894 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:45.089396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:34:45.098091 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:34:45.135528 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:34:45.142034 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:34:45.173993 systemd-networkd[1160]: lo: Link UP
Sep 12 17:34:45.174005 systemd-networkd[1160]: lo: Gained carrier
Sep 12 17:34:45.175986 systemd-networkd[1160]: Enumeration completed
Sep 12 17:34:45.176458 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:45.176464 systemd-networkd[1160]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:34:45.177740 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:34:45.179858 systemd[1]: Reached target network.target - Network.
Sep 12 17:34:45.181075 systemd-networkd[1160]: eth0: Link UP
Sep 12 17:34:45.181081 systemd-networkd[1160]: eth0: Gained carrier
Sep 12 17:34:45.181096 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:45.200913 systemd-networkd[1160]: eth0: DHCPv4 address 172.31.16.204/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:34:45.575978 ignition[1102]: Ignition 2.19.0
Sep 12 17:34:45.575993 ignition[1102]: Stage: fetch-offline
Sep 12 17:34:45.576279 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:45.576292 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:34:45.576880 ignition[1102]: Ignition finished successfully
Sep 12 17:34:45.579279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:34:45.583073 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
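The DHCPv4 lease above places the instance in a /20 VPC subnet (4096 addresses). A quick check with Python's standard ipaddress module, using the logged values:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.204/20")
    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True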
Sep 12 17:34:45.600477 ignition[1171]: Ignition 2.19.0
Sep 12 17:34:45.600488 ignition[1171]: Stage: fetch
Sep 12 17:34:45.600997 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:45.601013 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:34:45.601139 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:34:45.611491 ignition[1171]: PUT result: OK
Sep 12 17:34:45.615504 ignition[1171]: parsed url from cmdline: ""
Sep 12 17:34:45.615649 ignition[1171]: no config URL provided
Sep 12 17:34:45.615676 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:34:45.615695 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:34:45.615716 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:34:45.618924 ignition[1171]: PUT result: OK
Sep 12 17:34:45.618995 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:34:45.619669 ignition[1171]: GET result: OK
Sep 12 17:34:45.619760 ignition[1171]: parsing config with SHA512: 7a040bbb948f47aa90bd7682825bf63c8fed3954952658b14e57904a343a23252420e9a5669cf1097bcf8442fd716a1be5c250adc40deb90ade09b1c4909346c
Sep 12 17:34:45.625581 unknown[1171]: fetched base config from "system"
Sep 12 17:34:45.626154 unknown[1171]: fetched base config from "system"
Sep 12 17:34:45.626644 ignition[1171]: fetch: fetch complete
Sep 12 17:34:45.626160 unknown[1171]: fetched user config from "aws"
Sep 12 17:34:45.626649 ignition[1171]: fetch: fetch passed
Sep 12 17:34:45.626713 ignition[1171]: Ignition finished successfully
Sep 12 17:34:45.629303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:34:45.633081 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:34:45.651679 ignition[1177]: Ignition 2.19.0
Sep 12 17:34:45.651694 ignition[1177]: Stage: kargs
Sep 12 17:34:45.652268 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:45.652283 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:34:45.652410 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:34:45.653313 ignition[1177]: PUT result: OK
Sep 12 17:34:45.655867 ignition[1177]: kargs: kargs passed
Sep 12 17:34:45.655947 ignition[1177]: Ignition finished successfully
Sep 12 17:34:45.657303 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:34:45.663067 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:34:45.679668 ignition[1183]: Ignition 2.19.0
Sep 12 17:34:45.679682 ignition[1183]: Stage: disks
Sep 12 17:34:45.680207 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:45.680220 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:34:45.680343 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:34:45.681277 ignition[1183]: PUT result: OK
Sep 12 17:34:45.684520 ignition[1183]: disks: disks passed
Sep 12 17:34:45.684600 ignition[1183]: Ignition finished successfully
Sep 12 17:34:45.686566 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:34:45.687883 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:34:45.688255 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:34:45.688842 systemd[1]: Reached target local-fs.target - Local File Systems.
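The PUT-then-GET pairs in the fetch stage are the IMDSv2 handshake: Ignition first requests a session token, then presents it on every metadata read. A minimal sketch of the same exchange using only the Python standard library (the 21600-second TTL is a typical choice, not something the log records):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a token request ("PUT .../latest/api/token" in the log).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET user data with the token ("GET .../2019-10-01/user-data").
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(req, timeout=2).read()
    print(len(user_data), "bytes of user data")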
Sep 12 17:34:45.689393 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:34:45.689965 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:34:45.695045 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:34:45.727567 systemd-fsck[1191]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:34:45.730747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:34:45.737275 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:34:45.840835 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:34:45.841207 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:34:45.842345 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:34:45.849388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:34:45.851939 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:34:45.853991 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:34:45.854040 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:34:45.854065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:34:45.871842 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1211)
Sep 12 17:34:45.872391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:34:45.880873 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:45.880914 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:45.880934 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:34:45.880952 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:34:45.886056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:34:45.889938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:34:46.203242 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:34:46.219532 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:34:46.224195 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:34:46.229048 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:34:46.420963 systemd-networkd[1160]: eth0: Gained IPv6LL
Sep 12 17:34:46.500559 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:34:46.509069 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:34:46.518042 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:34:46.528029 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:34:46.530115 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:46.561704 ignition[1323]: INFO : Ignition 2.19.0 Sep 12 17:34:46.563631 ignition[1323]: INFO : Stage: mount Sep 12 17:34:46.563631 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:46.563631 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:34:46.563631 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:34:46.563703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:34:46.566310 ignition[1323]: INFO : PUT result: OK Sep 12 17:34:46.570657 ignition[1323]: INFO : mount: mount passed Sep 12 17:34:46.572061 ignition[1323]: INFO : Ignition finished successfully Sep 12 17:34:46.572850 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:34:46.579010 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:34:46.596217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:34:46.614903 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1335) Sep 12 17:34:46.620285 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:46.620375 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:46.620398 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:34:46.632869 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:34:46.637202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:34:46.660841 ignition[1352]: INFO : Ignition 2.19.0 Sep 12 17:34:46.660841 ignition[1352]: INFO : Stage: files Sep 12 17:34:46.662236 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:46.662236 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:34:46.662236 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:34:46.663558 ignition[1352]: INFO : PUT result: OK Sep 12 17:34:46.665798 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:34:46.666547 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:34:46.669069 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:34:46.682748 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:34:46.683919 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:34:46.683919 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:34:46.683474 unknown[1352]: wrote ssh authorized keys file for user: core Sep 12 17:34:46.686916 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:34:46.686916 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:34:46.686916 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:34:46.686916 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: 
attempt #1 Sep 12 17:34:46.768393 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:34:47.041327 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:34:47.041327 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:47.042938 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:34:47.399607 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:34:47.897240 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:34:47.898643 ignition[1352]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 17:34:47.915054 ignition[1352]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(c): [finished] processing unit 
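op(a) and op(b) above are the systemd-sysext activation pattern: the extension image itself is stored under /opt/extensions, and a symlink in /etc/extensions is what makes systemd-sysext merge it at boot. A sketch of the same layout, with paths from the log and a scratch directory standing in for /sysroot so it is safe to run anywhere:

    import os

    sysroot = "/tmp/sysroot-demo"  # stand-in for /sysroot; assumption for the demo
    target = "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
    link = os.path.join(sysroot, "etc/extensions/kubernetes.raw")

    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):
        # An absolute symlink, resolved against the root the machine pivots
        # into rather than the initramfs root, exactly as op(a) writes it.
        os.symlink(target, link)
    print(link, "->", os.readlink(link))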
"containerd.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:34:47.916671 ignition[1352]: INFO : files: files passed Sep 12 17:34:47.916671 ignition[1352]: INFO : Ignition finished successfully Sep 12 17:34:47.918008 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:34:47.942201 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:34:47.963195 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:34:47.975087 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:34:47.975221 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:34:48.005489 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:48.005489 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:48.009193 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:34:48.010561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:34:48.012208 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:34:48.020330 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:34:48.047896 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:34:48.048050 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:34:48.049233 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:34:48.050315 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:34:48.051149 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:34:48.060095 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:34:48.074124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:34:48.080071 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:34:48.095028 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:34:48.095780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 12 17:34:48.096791 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:34:48.097673 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:34:48.097878 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:34:48.099122 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:34:48.100307 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:34:48.101073 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:34:48.101846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:34:48.102740 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:34:48.103664 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:34:48.104540 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:34:48.105421 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:34:48.106693 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:34:48.107605 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:34:48.108346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:34:48.108531 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:34:48.109635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:34:48.110441 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:34:48.111122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:34:48.111570 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:34:48.112073 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:34:48.112249 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:34:48.113629 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:34:48.113847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:34:48.114546 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:34:48.114705 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:34:48.126118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:34:48.126777 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:34:48.126994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:34:48.130137 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:34:48.130871 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:34:48.131115 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:34:48.135184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:34:48.135437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:34:48.149967 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:34:48.150102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 17:34:48.156617 ignition[1405]: INFO : Ignition 2.19.0 Sep 12 17:34:48.157584 ignition[1405]: INFO : Stage: umount Sep 12 17:34:48.158237 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:48.158237 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:34:48.158237 ignition[1405]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:34:48.161110 ignition[1405]: INFO : PUT result: OK Sep 12 17:34:48.164774 ignition[1405]: INFO : umount: umount passed Sep 12 17:34:48.164774 ignition[1405]: INFO : Ignition finished successfully Sep 12 17:34:48.167165 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:34:48.167310 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:34:48.168539 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:34:48.168655 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:34:48.169598 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:34:48.169661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:34:48.170334 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:34:48.170395 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:34:48.170987 systemd[1]: Stopped target network.target - Network. Sep 12 17:34:48.171771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:34:48.171858 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:34:48.172480 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:34:48.173053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:34:48.177909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:34:48.178323 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:34:48.179179 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:34:48.179986 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:34:48.180036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:34:48.180573 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:34:48.180614 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:34:48.181132 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:34:48.181184 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:34:48.181705 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:34:48.181746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:34:48.182509 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:34:48.183125 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:34:48.184971 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:34:48.186884 systemd-networkd[1160]: eth0: DHCPv6 lease lost Sep 12 17:34:48.189683 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:34:48.189853 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:34:48.191056 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:34:48.191144 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:34:48.198937 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Sep 12 17:34:48.199424 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:34:48.199510 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:34:48.200430 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:34:48.205314 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:34:48.205467 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:34:48.216082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:34:48.216177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:34:48.216835 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:34:48.216898 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:34:48.220145 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:34:48.220252 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:34:48.221947 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:34:48.222150 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:34:48.224425 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:34:48.224562 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:34:48.226163 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:34:48.226228 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:34:48.226776 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:34:48.226910 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:34:48.227643 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:34:48.227707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:34:48.228755 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:34:48.228869 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:34:48.229908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:34:48.229952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:34:48.236089 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:34:48.237284 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:34:48.237907 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:34:48.239885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:34:48.239967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:48.246406 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:34:48.246543 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:34:48.321041 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:34:48.321157 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:34:48.322341 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:34:48.322747 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:34:48.322802 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Sep 12 17:34:48.330022 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:34:48.346464 systemd[1]: Switching root. Sep 12 17:34:48.372800 systemd-journald[178]: Journal stopped Sep 12 17:34:50.315585 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Sep 12 17:34:50.315680 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:34:50.315708 kernel: SELinux: policy capability open_perms=1 Sep 12 17:34:50.315727 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:34:50.315753 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:34:50.315773 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:34:50.315793 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:34:50.315851 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:34:50.315873 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:34:50.315897 kernel: audit: type=1403 audit(1757698488.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:34:50.315919 systemd[1]: Successfully loaded SELinux policy in 49.707ms. Sep 12 17:34:50.315955 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.261ms. Sep 12 17:34:50.315978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:34:50.316000 systemd[1]: Detected virtualization amazon. Sep 12 17:34:50.316021 systemd[1]: Detected architecture x86-64. Sep 12 17:34:50.316042 systemd[1]: Detected first boot. Sep 12 17:34:50.316063 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:34:50.316084 zram_generator::config[1464]: No configuration found. Sep 12 17:34:50.316115 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:34:50.316137 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:34:50.316158 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:34:50.316182 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:34:50.316204 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:34:50.316225 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:34:50.316245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:34:50.316265 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:34:50.316288 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:34:50.316306 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:34:50.316329 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:34:50.316347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:34:50.316365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:34:50.316383 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:34:50.316403 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Sep 12 17:34:50.316423 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:34:50.316450 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:34:50.316471 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:34:50.316489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:34:50.316509 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:34:50.316528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:34:50.316548 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:34:50.316570 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:34:50.316590 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:34:50.316612 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:34:50.316637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:34:50.316657 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:34:50.316679 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:34:50.316700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:34:50.316721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:34:50.316740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:34:50.316764 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:34:50.316785 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:34:50.316828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:34:50.316856 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:34:50.316878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:50.316900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:34:50.316922 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:34:50.316943 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:34:50.316964 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:34:50.316986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:50.317014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:34:50.317039 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:34:50.317060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:50.317081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:34:50.317103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:50.317124 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:34:50.317145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:50.317166 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 12 17:34:50.317188 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 17:34:50.317210 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 12 17:34:50.317235 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:34:50.317260 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:34:50.317281 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:34:50.317303 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:34:50.317323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:34:50.317341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:50.317362 kernel: loop: module loaded Sep 12 17:34:50.317382 kernel: fuse: init (API version 7.39) Sep 12 17:34:50.317401 kernel: ACPI: bus type drm_connector registered Sep 12 17:34:50.317424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:34:50.317444 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:34:50.317464 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:34:50.317484 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:34:50.317503 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:34:50.317563 systemd-journald[1571]: Collecting audit messages is disabled. Sep 12 17:34:50.317599 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:34:50.317622 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:34:50.317643 systemd-journald[1571]: Journal started Sep 12 17:34:50.317680 systemd-journald[1571]: Runtime Journal (/run/log/journal/ec22b4d3ed23b255c903bb25a4dce537) is 4.7M, max 38.2M, 33.4M free. Sep 12 17:34:50.319475 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:34:50.323187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:34:50.324251 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:34:50.324457 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:34:50.325196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:50.325404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:50.326149 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:34:50.326338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:34:50.327021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:50.327215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:50.327989 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:34:50.328170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:34:50.328798 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:50.329416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:50.330547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:34:50.331661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:34:50.332919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:34:50.348127 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:34:50.355954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:34:50.364084 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:34:50.365165 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:34:50.379117 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:34:50.385974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:34:50.387931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:34:50.402068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:34:50.404960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:34:50.420120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:34:50.424980 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:34:50.434499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:34:50.439085 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:34:50.450233 systemd-journald[1571]: Time spent on flushing to /var/log/journal/ec22b4d3ed23b255c903bb25a4dce537 is 72.504ms for 972 entries. Sep 12 17:34:50.450233 systemd-journald[1571]: System Journal (/var/log/journal/ec22b4d3ed23b255c903bb25a4dce537) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:34:50.534898 systemd-journald[1571]: Received client request to flush runtime journal. Sep 12 17:34:50.472593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:34:50.473424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:34:50.489493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:34:50.534344 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Sep 12 17:34:50.534368 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Sep 12 17:34:50.538184 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:34:50.543558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:34:50.550728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:34:50.561115 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:34:50.569045 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:34:50.606419 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:34:50.627794 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 12 17:34:50.636050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:34:50.667740 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Sep 12 17:34:50.667770 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Sep 12 17:34:50.673401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:34:51.306281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:34:51.315025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:34:51.340990 systemd-udevd[1646]: Using default interface naming scheme 'v255'. Sep 12 17:34:51.415457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:34:51.426740 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:34:51.467156 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:34:51.526038 (udev-worker)[1658]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:34:51.530191 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 17:34:51.595981 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:34:51.617841 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:34:51.621875 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:34:51.632763 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:34:51.632850 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:34:51.636843 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 12 17:34:51.641866 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:34:51.699851 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:34:51.724075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:34:51.725012 systemd-networkd[1650]: lo: Link UP Sep 12 17:34:51.725019 systemd-networkd[1650]: lo: Gained carrier Sep 12 17:34:51.727331 systemd-networkd[1650]: Enumeration completed Sep 12 17:34:51.727561 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:34:51.729177 systemd-networkd[1650]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:51.729308 systemd-networkd[1650]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:34:51.734197 systemd-networkd[1650]: eth0: Link UP Sep 12 17:34:51.734401 systemd-networkd[1650]: eth0: Gained carrier Sep 12 17:34:51.734434 systemd-networkd[1650]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:51.742205 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:34:51.756925 systemd-networkd[1650]: eth0: DHCPv4 address 172.31.16.204/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:34:51.758188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:34:51.758534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:51.772021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
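The DHCPv4 lease above is a /20, not the /24 a quick read of the dotted quad might suggest, so the gateway 172.31.16.1 sits inside the same 4096-address block as the host. The standard library confirms the arithmetic, with the values copied from the log:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.204/20")
    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096 addresses in a /20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: on-link gateway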
Sep 12 17:34:51.818875 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1658) Sep 12 17:34:51.943954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:51.980934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:34:51.982030 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:34:51.993311 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:34:52.033447 lvm[1773]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:34:52.059531 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:34:52.061388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:34:52.072149 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:34:52.078157 lvm[1776]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:34:52.106800 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:34:52.109053 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:34:52.110168 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:34:52.110392 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:34:52.111105 systemd[1]: Reached target machines.target - Containers. Sep 12 17:34:52.112854 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:34:52.118031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:34:52.121248 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:34:52.123785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:52.130213 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:34:52.134348 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:34:52.138053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:34:52.141268 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:34:52.155344 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:34:52.167837 kernel: loop0: detected capacity change from 0 to 221472 Sep 12 17:34:52.181078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:34:52.182217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Sep 12 17:34:52.298828 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:34:52.339156 kernel: loop1: detected capacity change from 0 to 142488 Sep 12 17:34:52.461851 kernel: loop2: detected capacity change from 0 to 61336 Sep 12 17:34:52.521844 kernel: loop3: detected capacity change from 0 to 140768 Sep 12 17:34:52.629855 kernel: loop4: detected capacity change from 0 to 221472 Sep 12 17:34:52.655952 kernel: loop5: detected capacity change from 0 to 142488 Sep 12 17:34:52.674845 kernel: loop6: detected capacity change from 0 to 61336 Sep 12 17:34:52.696837 kernel: loop7: detected capacity change from 0 to 140768 Sep 12 17:34:52.722733 (sd-merge)[1797]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:34:52.724374 (sd-merge)[1797]: Merged extensions into '/usr'. Sep 12 17:34:52.730249 systemd[1]: Reloading requested from client PID 1784 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:34:52.730268 systemd[1]: Reloading... Sep 12 17:34:52.814855 zram_generator::config[1825]: No configuration found. Sep 12 17:34:52.983526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:53.076276 systemd[1]: Reloading finished in 345 ms. Sep 12 17:34:53.092524 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:34:53.110092 systemd[1]: Starting ensure-sysext.service... Sep 12 17:34:53.120143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:34:53.124819 systemd[1]: Reloading requested from client PID 1882 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:34:53.125007 systemd[1]: Reloading... Sep 12 17:34:53.143960 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:34:53.144496 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:34:53.145905 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:34:53.146326 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Sep 12 17:34:53.146418 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Sep 12 17:34:53.161502 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:34:53.161522 systemd-tmpfiles[1883]: Skipping /boot Sep 12 17:34:53.172000 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:34:53.172015 systemd-tmpfiles[1883]: Skipping /boot Sep 12 17:34:53.252833 zram_generator::config[1914]: No configuration found. Sep 12 17:34:53.406679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:53.498499 systemd[1]: Reloading finished in 372 ms. Sep 12 17:34:53.517328 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:34:53.526674 ldconfig[1780]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:34:53.537099 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Sep 12 17:34:53.543027 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:34:53.551072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:34:53.562561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:34:53.569479 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:34:53.575094 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:34:53.589190 systemd-networkd[1650]: eth0: Gained IPv6LL Sep 12 17:34:53.596565 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:34:53.614037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:53.614499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:53.622244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:53.635253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:53.646077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:34:53.648017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:53.651304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:53.654592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:53.657205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:53.662585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:53.662997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:53.669985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:34:53.672284 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:34:53.689380 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:53.691790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:53.697062 augenrules[2003]: No rules Sep 12 17:34:53.706576 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:53.708835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:34:53.715005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:53.715519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:34:53.725091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:34:53.735074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:34:53.749262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:34:53.756308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 17:34:53.757088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:34:53.757179 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:34:53.766343 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:34:53.769509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:34:53.770488 systemd[1]: Finished ensure-sysext.service. Sep 12 17:34:53.775879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:34:53.776132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:34:53.777183 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:34:53.777431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:34:53.783856 systemd-resolved[1979]: Positive Trust Anchors: Sep 12 17:34:53.786533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:34:53.786800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:34:53.786863 systemd-resolved[1979]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:34:53.786920 systemd-resolved[1979]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:34:53.792068 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:34:53.794185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:34:53.803747 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:34:53.804266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:34:53.807572 systemd-resolved[1979]: Defaulting to hostname 'linux'. Sep 12 17:34:53.811372 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:34:53.813159 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:34:53.814743 systemd[1]: Reached target network.target - Network. Sep 12 17:34:53.815482 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:34:53.816041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:34:53.836629 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:34:53.837751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:34:53.837788 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:34:53.838300 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 12 17:34:53.838678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:34:53.839296 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:34:53.839717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:34:53.840062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:34:53.840389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:34:53.840423 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:34:53.840718 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:34:53.842320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:34:53.844475 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:34:53.846280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:34:53.853064 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:34:53.853729 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:34:53.854319 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:34:53.855073 systemd[1]: System is tainted: cgroupsv1 Sep 12 17:34:53.855128 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:34:53.855335 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:34:53.857977 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:34:53.865019 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:34:53.872490 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:34:53.874929 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:34:53.882632 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:34:53.883910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:34:53.910914 jq[2045]: false Sep 12 17:34:53.909959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:53.928174 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:34:53.933145 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:34:53.953023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 12 17:34:53.958269 extend-filesystems[2046]: Found loop4 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found loop5 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found loop6 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found loop7 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p1 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p2 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p3 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found usr Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p4 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p6 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p7 Sep 12 17:34:53.958269 extend-filesystems[2046]: Found nvme0n1p9 Sep 12 17:34:53.958269 extend-filesystems[2046]: Checking size of /dev/nvme0n1p9 Sep 12 17:34:53.965186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:34:53.980937 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:34:53.992018 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:34:54.002955 dbus-daemon[2044]: [system] SELinux support is enabled Sep 12 17:34:54.004543 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:34:54.018590 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:34:54.020469 dbus-daemon[2044]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1650 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:34:54.022363 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:34:54.035044 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:34:54.046934 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:34:54.055288 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:34:54.071864 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:34:54.072211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:34:54.085507 extend-filesystems[2046]: Resized partition /dev/nvme0n1p9 Sep 12 17:34:54.093912 extend-filesystems[2087]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:34:54.091269 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:34:54.110592 jq[2074]: true Sep 12 17:34:54.128036 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:34:54.097232 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:34:54.099036 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:34:54.121547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:34:54.123988 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
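The resize above grows the root ext4 on nvme0n1p9 from 553472 to 1489915 blocks. At the 4 KiB block size ext4 defaults to (an assumption; the log does not print it), that is roughly 2.1 GiB expanding to 5.7 GiB, i.e. the filesystem is stretched to fill its partition:

    BLOCK = 4096  # assumed ext4 block size in bytes
    for blocks in (553472, 1489915):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")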
Sep 12 17:34:54.138125 update_engine[2073]: I20250912 17:34:54.137609 2073 main.cc:92] Flatcar Update Engine starting Sep 12 17:34:54.141627 update_engine[2073]: I20250912 17:34:54.141357 2073 update_check_scheduler.cc:74] Next update check in 2m28s Sep 12 17:34:54.145055 ntpd[2053]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:34:54.145083 ntpd[2053]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:34:54.145106 ntpd[2053]: ---------------------------------------------------- Sep 12 17:34:54.145117 ntpd[2053]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:34:54.145127 ntpd[2053]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:34:54.145137 ntpd[2053]: corporation. Support and training for ntp-4 are Sep 12 17:34:54.145147 ntpd[2053]: available at https://www.nwtime.org/support Sep 12 17:34:54.145157 ntpd[2053]: ---------------------------------------------------- Sep 12 17:34:54.154667 ntpd[2053]: proto: precision = 0.063 usec (-24) Sep 12 17:34:54.167606 ntpd[2053]: basedate set to 2025-08-31 Sep 12 17:34:54.167640 ntpd[2053]: gps base set to 2025-08-31 (week 2382) Sep 12 17:34:54.182100 coreos-metadata[2042]: Sep 12 17:34:54.181 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:34:54.188146 coreos-metadata[2042]: Sep 12 17:34:54.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:34:54.188146 coreos-metadata[2042]: Sep 12 17:34:54.186 INFO Fetch successful Sep 12 17:34:54.188146 coreos-metadata[2042]: Sep 12 17:34:54.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:34:54.188146 coreos-metadata[2042]: Sep 12 17:34:54.187 INFO Fetch successful Sep 12 17:34:54.188146 coreos-metadata[2042]: Sep 12 17:34:54.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 17:34:54.183619 ntpd[2053]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:34:54.183677 ntpd[2053]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:34:54.183903 ntpd[2053]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:34:54.183946 ntpd[2053]: Listen normally on 3 eth0 172.31.16.204:123 Sep 12 17:34:54.183993 ntpd[2053]: Listen normally on 4 lo [::1]:123 Sep 12 17:34:54.184042 ntpd[2053]: Listen normally on 5 eth0 [fe80::4d8:4fff:fef9:b2c9%2]:123 Sep 12 17:34:54.184079 ntpd[2053]: Listening on routing socket on fd #22 for interface updates Sep 12 17:34:54.190610 ntpd[2053]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:34:54.190643 ntpd[2053]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:34:54.191043 coreos-metadata[2042]: Sep 12 17:34:54.189 INFO Fetch successful Sep 12 17:34:54.191043 coreos-metadata[2042]: Sep 12 17:34:54.189 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:34:54.191043 coreos-metadata[2042]: Sep 12 17:34:54.190 INFO Fetch successful Sep 12 17:34:54.191043 coreos-metadata[2042]: Sep 12 17:34:54.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:34:54.193151 (ntainerd)[2102]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:34:54.196939 coreos-metadata[2042]: Sep 12 17:34:54.191 INFO Fetch failed with 404: resource not found Sep 12 17:34:54.196939 coreos-metadata[2042]: Sep 12 17:34:54.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:34:54.196939 coreos-metadata[2042]: Sep 12 17:34:54.194 INFO Fetch successful Sep 12 17:34:54.196939 coreos-metadata[2042]: Sep 12 17:34:54.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.198 INFO Fetch successful Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.198 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.199 INFO Fetch successful Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.199 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.201 INFO Fetch successful Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.201 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:34:54.216585 coreos-metadata[2042]: Sep 12 17:34:54.203 INFO Fetch successful Sep 12 17:34:54.213725 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:34:54.232873 systemd[1]: Started update-engine.service - Update Engine.
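[Editor's note] The coreos-metadata fetches above follow the IMDSv2 pattern visible in the URLs: a PUT to /latest/api/token obtains a session token, and every subsequent metadata GET presents it in a header. A sketch of the same exchange (endpoints and header names as in the log; error handling abbreviated):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "strings"
)

const imds = "http://169.254.169.254"

func main() {
    // Step 1: buy a session token (the "Putting .../api/token" line).
    req, _ := http.NewRequest(http.MethodPut, imds+"/latest/api/token", nil)
    req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    tok, _ := io.ReadAll(resp.Body)
    resp.Body.Close()

    // Step 2: fetch a metadata path with the token attached.
    req, _ = http.NewRequest(http.MethodGet, imds+"/2021-01-03/meta-data/instance-id", nil)
    req.Header.Set("X-aws-ec2-metadata-token", strings.TrimSpace(string(tok)))
    resp, err = http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    id, _ := io.ReadAll(resp.Body)
    fmt.Println("instance-id:", string(id))
}

A 404, as for the ipv6 path above, simply means the instance has no such attribute; the agent logs it and moves on.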
Sep 12 17:34:54.251925 tar[2094]: linux-amd64/helm Sep 12 17:34:54.251694 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:34:54.251730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:34:54.261035 jq[2100]: true Sep 12 17:34:54.275281 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:34:54.276942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:34:54.276980 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:34:54.280312 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:34:54.285597 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:34:54.298401 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:34:54.310864 extend-filesystems[2087]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:34:54.310864 extend-filesystems[2087]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:34:54.310864 extend-filesystems[2087]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:34:54.320882 extend-filesystems[2046]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:34:54.313393 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:34:54.313727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:34:54.325082 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1658) Sep 12 17:34:54.327442 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:34:54.397607 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:34:54.423180 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:34:54.424308 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:34:54.664880 bash[2241]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:34:54.667374 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:34:54.668305 systemd-logind[2070]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:34:54.668330 systemd-logind[2070]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:34:54.668353 systemd-logind[2070]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:34:54.679981 systemd-logind[2070]: New seat seat0. Sep 12 17:34:54.692139 systemd[1]: Starting sshkeys.service... Sep 12 17:34:54.700040 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:34:54.745626 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:34:54.755559 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
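[Editor's note] extend-filesystems grew /dev/nvme0n1p9 from 553472 to 1489915 4k blocks while it stayed mounted at /; resize2fs can grow a mounted ext4 filesystem online, which is exactly what the kernel messages above record. A rough equivalent of that flow, not the service's actual implementation, assuming growpart (from cloud-utils) and resize2fs are installed:

package main

import (
    "log"
    "os/exec"
)

func run(name string, args ...string) {
    out, err := exec.Command(name, args...).CombinedOutput()
    if err != nil {
        log.Fatalf("%s: %v\n%s", name, err, out)
    }
    log.Printf("%s: %s", name, out)
}

func main() {
    run("growpart", "/dev/nvme0n1", "9") // grow partition 9 to fill the disk
    run("resize2fs", "/dev/nvme0n1p9")   // online-resize the mounted ext4 fs
}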
Sep 12 17:34:54.825587 amazon-ssm-agent[2168]: Initializing new seelog logger Sep 12 17:34:54.828895 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Sep 12 17:34:54.828895 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.828895 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.828895 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 processing appconfig overrides Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 processing appconfig overrides Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 processing appconfig overrides Sep 12 17:34:54.841838 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO Proxy environment variables: Sep 12 17:34:54.862213 locksmithd[2127]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:34:54.865178 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.865178 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:34:54.865178 amazon-ssm-agent[2168]: 2025/09/12 17:34:54 processing appconfig overrides Sep 12 17:34:54.928250 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:34:54.928530 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:34:54.932226 dbus-daemon[2044]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2126 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:34:54.943419 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO https_proxy: Sep 12 17:34:54.947299 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:34:54.977108 polkitd[2275]: Started polkitd version 121 Sep 12 17:34:54.991539 polkitd[2275]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:34:54.998470 polkitd[2275]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:34:55.005593 polkitd[2275]: Finished loading, compiling and executing 2 rules Sep 12 17:34:55.006477 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 12 17:34:55.006266 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:34:55.010618 polkitd[2275]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:34:55.032494 coreos-metadata[2261]: Sep 12 17:34:55.032 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:34:55.034730 coreos-metadata[2261]: Sep 12 17:34:55.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:34:55.036297 coreos-metadata[2261]: Sep 12 17:34:55.035 INFO Fetch successful Sep 12 17:34:55.036297 coreos-metadata[2261]: Sep 12 17:34:55.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:34:55.037029 coreos-metadata[2261]: Sep 12 17:34:55.036 INFO Fetch successful Sep 12 17:34:55.042500 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO http_proxy: Sep 12 17:34:55.042885 unknown[2261]: wrote ssh authorized keys file for user: core Sep 12 17:34:55.073499 sshd_keygen[2105]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:34:55.086358 systemd-hostnamed[2126]: Hostname set to (transient) Sep 12 17:34:55.087191 systemd-resolved[1979]: System hostname changed to 'ip-172-31-16-204'. Sep 12 17:34:55.100837 update-ssh-keys[2286]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:34:55.104571 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:34:55.108154 systemd[1]: Finished sshkeys.service. Sep 12 17:34:55.145829 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO no_proxy: Sep 12 17:34:55.203996 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:34:55.218995 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:34:55.240402 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:34:55.240749 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:34:55.248186 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:34:55.249041 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:34:55.320650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:34:55.338406 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:34:55.347377 amazon-ssm-agent[2168]: 2025-09-12 17:34:54 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:34:55.349341 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:34:55.353895 containerd[2102]: time="2025-09-12T17:34:55.352598389Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:34:55.353361 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:34:55.446147 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO Agent will take identity from EC2 Sep 12 17:34:55.455624 containerd[2102]: time="2025-09-12T17:34:55.455490454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.460289 containerd[2102]: time="2025-09-12T17:34:55.460236364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460416260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460448893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460634891Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460656358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460735736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.460752918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461053913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461076037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461095448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461111457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461217451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.461864 containerd[2102]: time="2025-09-12T17:34:55.461462137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:55.462363 containerd[2102]: time="2025-09-12T17:34:55.461663226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:55.462363 containerd[2102]: time="2025-09-12T17:34:55.461684986Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:34:55.462363 containerd[2102]: time="2025-09-12T17:34:55.461773669Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 17:34:55.463505 containerd[2102]: time="2025-09-12T17:34:55.463478484Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:34:55.468623 containerd[2102]: time="2025-09-12T17:34:55.468582766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:34:55.469122 containerd[2102]: time="2025-09-12T17:34:55.469101944Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:34:55.469246 containerd[2102]: time="2025-09-12T17:34:55.469231628Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:34:55.469321 containerd[2102]: time="2025-09-12T17:34:55.469307797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:34:55.469394 containerd[2102]: time="2025-09-12T17:34:55.469381564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:34:55.469629 containerd[2102]: time="2025-09-12T17:34:55.469611714Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:34:55.470650 containerd[2102]: time="2025-09-12T17:34:55.470626905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:34:55.471344 containerd[2102]: time="2025-09-12T17:34:55.471321379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:34:55.471433 containerd[2102]: time="2025-09-12T17:34:55.471419612Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:34:55.471506 containerd[2102]: time="2025-09-12T17:34:55.471492588Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:34:55.471575 containerd[2102]: time="2025-09-12T17:34:55.471562974Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.471658 containerd[2102]: time="2025-09-12T17:34:55.471645269Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.471829 containerd[2102]: time="2025-09-12T17:34:55.471789331Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472264350Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472293607Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472314390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472334493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472357238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472388099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472409633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472430499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472452000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472476860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472496998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472515525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472535059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.472867 containerd[2102]: time="2025-09-12T17:34:55.472568610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472593283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472613176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472631272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472651180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472674784Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472706281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472724299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472740999Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:34:55.473415 containerd[2102]: time="2025-09-12T17:34:55.472793124Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474468099Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474496884Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474518648Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474534756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474554929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474570092Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:34:55.476826 containerd[2102]: time="2025-09-12T17:34:55.474585405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.475097383Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.475193825Z" level=info msg="Connect containerd service" Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.475262972Z" level=info msg="using legacy CRI server" Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.475274302Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.475414022Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:34:55.477134 containerd[2102]: time="2025-09-12T17:34:55.476125553Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478634517Z" level=info msg="Start subscribing containerd event" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478701138Z" level=info msg="Start recovering state" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478782009Z" level=info msg="Start event monitor" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478794845Z" level=info msg="Start snapshots syncer" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478824494Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.478836368Z" level=info msg="Start streaming server" Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.479139489Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:34:55.481265 containerd[2102]: time="2025-09-12T17:34:55.479195500Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:34:55.479416 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:34:55.481769 containerd[2102]: time="2025-09-12T17:34:55.481745013Z" level=info msg="containerd successfully booted in 0.133612s" Sep 12 17:34:55.544707 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:34:55.646825 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:34:55.744367 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:34:55.774162 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:34:55.774376 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:34:55.774460 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:34:55.774553 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 12 17:34:55.774631 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [Registrar] Starting registrar module Sep 12 17:34:55.774714 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:34:55.774787 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:34:55.774867 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:34:55.774942 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:34:55.775020 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:34:55.779654 tar[2094]: linux-amd64/LICENSE Sep 12 17:34:55.780177 tar[2094]: linux-amd64/README.md Sep 12 17:34:55.793326 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:34:55.843524 amazon-ssm-agent[2168]: 2025-09-12 17:34:55 INFO [CredentialRefresher] Next credential rotation will be in 30.758311423266665 minutes Sep 12 17:34:56.329326 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:34:56.338193 systemd[1]: Started sshd@0-172.31.16.204:22-147.75.109.163:41558.service - OpenSSH per-connection server daemon (147.75.109.163:41558). Sep 12 17:34:56.523957 sshd[2323]: Accepted publickey for core from 147.75.109.163 port 41558 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:34:56.526185 sshd[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:56.538301 systemd-logind[2070]: New session 1 of user core. Sep 12 17:34:56.541314 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:34:56.547279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:34:56.565262 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:34:56.574586 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:34:56.584464 (systemd)[2329]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:34:56.719997 systemd[2329]: Queued start job for default target default.target. Sep 12 17:34:56.720789 systemd[2329]: Created slice app.slice - User Application Slice. Sep 12 17:34:56.720859 systemd[2329]: Reached target paths.target - Paths. Sep 12 17:34:56.720882 systemd[2329]: Reached target timers.target - Timers. Sep 12 17:34:56.728830 systemd[2329]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:34:56.737500 systemd[2329]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:34:56.737587 systemd[2329]: Reached target sockets.target - Sockets. Sep 12 17:34:56.737608 systemd[2329]: Reached target basic.target - Basic System. Sep 12 17:34:56.737666 systemd[2329]: Reached target default.target - Main User Target. Sep 12 17:34:56.737703 systemd[2329]: Startup finished in 143ms. Sep 12 17:34:56.738262 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:34:56.744220 systemd[1]: Started session-1.scope - Session 1 of User core. 
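[Editor's note] With containerd serving on /run/containerd/containerd.sock (it reported booting in 0.133612s above), the daemon can be driven directly from Go. A minimal client sketch, assuming the github.com/containerd/containerd module:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
)

func main() {
    // Connect to the same socket the log shows containerd serving on.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    v, err := client.Version(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(v.Version, v.Revision) // e.g. v1.7.21 174e0d17...
}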
Sep 12 17:34:56.788696 amazon-ssm-agent[2168]: 2025-09-12 17:34:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:34:56.898430 systemd[1]: Started sshd@1-172.31.16.204:22-147.75.109.163:41562.service - OpenSSH per-connection server daemon (147.75.109.163:41562). Sep 12 17:34:56.904492 amazon-ssm-agent[2168]: 2025-09-12 17:34:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2341) started Sep 12 17:34:57.011828 amazon-ssm-agent[2168]: 2025-09-12 17:34:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:34:57.088042 sshd[2349]: Accepted publickey for core from 147.75.109.163 port 41562 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:34:57.089494 sshd[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:57.096982 systemd-logind[2070]: New session 2 of user core. Sep 12 17:34:57.104506 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:34:57.226875 sshd[2349]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:57.230928 systemd[1]: sshd@1-172.31.16.204:22-147.75.109.163:41562.service: Deactivated successfully. Sep 12 17:34:57.233972 systemd-logind[2070]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:34:57.234565 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:34:57.236144 systemd-logind[2070]: Removed session 2. Sep 12 17:34:57.255520 systemd[1]: Started sshd@2-172.31.16.204:22-147.75.109.163:41572.service - OpenSSH per-connection server daemon (147.75.109.163:41572). Sep 12 17:34:57.416398 sshd[2360]: Accepted publickey for core from 147.75.109.163 port 41572 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:34:57.417901 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:57.423468 systemd-logind[2070]: New session 3 of user core. Sep 12 17:34:57.429330 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:34:57.570152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:57.572905 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:57.573074 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:34:57.573784 systemd[1]: Startup finished in 7.871s (kernel) + 8.653s (userspace) = 16.524s. Sep 12 17:34:57.582713 sshd[2360]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:57.589218 systemd[1]: sshd@2-172.31.16.204:22-147.75.109.163:41572.service: Deactivated successfully. Sep 12 17:34:57.596057 systemd-logind[2070]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:34:57.599074 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:34:57.600284 systemd-logind[2070]: Removed session 3. 
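[Editor's note] The SHA256:Zk+yQ/... strings in the sshd lines are OpenSSH public key fingerprints. A small sketch computing the same kind of fingerprint from an authorized_keys entry, assuming golang.org/x/crypto/ssh; the key literal below is a hypothetical placeholder:

package main

import (
    "fmt"
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Hypothetical authorized_keys line; substitute a real key.
    line := []byte("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBGz... core")
    pub, _, _, _, err := ssh.ParseAuthorizedKey(line)
    if err != nil {
        log.Fatal(err) // the placeholder above will fail here
    }
    fmt.Println(ssh.FingerprintSHA256(pub)) // prints "SHA256:..."
}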
Sep 12 17:34:58.937004 kubelet[2373]: E0912 17:34:58.936907 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:58.939413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:58.939644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:35:01.703081 systemd-resolved[1979]: Clock change detected. Flushing caches. Sep 12 17:35:08.170610 systemd[1]: Started sshd@3-172.31.16.204:22-147.75.109.163:34712.service - OpenSSH per-connection server daemon (147.75.109.163:34712). Sep 12 17:35:08.363996 sshd[2388]: Accepted publickey for core from 147.75.109.163 port 34712 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:08.367558 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:08.375152 systemd-logind[2070]: New session 4 of user core. Sep 12 17:35:08.378191 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:35:08.504654 sshd[2388]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:08.509602 systemd[1]: sshd@3-172.31.16.204:22-147.75.109.163:34712.service: Deactivated successfully. Sep 12 17:35:08.515588 systemd-logind[2070]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:35:08.515634 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:35:08.517237 systemd-logind[2070]: Removed session 4. Sep 12 17:35:08.541187 systemd[1]: Started sshd@4-172.31.16.204:22-147.75.109.163:34720.service - OpenSSH per-connection server daemon (147.75.109.163:34720). Sep 12 17:35:08.719315 sshd[2396]: Accepted publickey for core from 147.75.109.163 port 34720 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:08.722203 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:08.737025 systemd-logind[2070]: New session 5 of user core. Sep 12 17:35:08.741114 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:35:08.865640 sshd[2396]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:08.869312 systemd[1]: sshd@4-172.31.16.204:22-147.75.109.163:34720.service: Deactivated successfully. Sep 12 17:35:08.872264 systemd-logind[2070]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:35:08.873248 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:35:08.875550 systemd-logind[2070]: Removed session 5. Sep 12 17:35:08.894149 systemd[1]: Started sshd@5-172.31.16.204:22-147.75.109.163:34734.service - OpenSSH per-connection server daemon (147.75.109.163:34734). Sep 12 17:35:09.054509 sshd[2404]: Accepted publickey for core from 147.75.109.163 port 34734 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:09.056760 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:09.061542 systemd-logind[2070]: New session 6 of user core. Sep 12 17:35:09.068510 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:35:09.197787 sshd[2404]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:09.202161 systemd[1]: sshd@5-172.31.16.204:22-147.75.109.163:34734.service: Deactivated successfully. 
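[Editor's note] The kubelet exit above is a missing prerequisite rather than a crash: on a kubeadm-managed node (the KUBELET_KUBEADM_ARGS drop-in logged earlier points to one), /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so systemd keeps scheduling restarts (the counter lines that follow) until the node is joined. A tiny probe for that precondition:

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    const cfg = "/var/lib/kubelet/config.yaml"
    for {
        if _, err := os.Stat(cfg); err == nil {
            fmt.Println("kubelet config present; node has been joined")
            return
        }
        fmt.Println("waiting for", cfg, "(node not joined yet)")
        time.Sleep(10 * time.Second)
    }
}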
Sep 12 17:35:09.207682 systemd-logind[2070]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:35:09.208559 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:35:09.209821 systemd-logind[2070]: Removed session 6. Sep 12 17:35:09.228260 systemd[1]: Started sshd@6-172.31.16.204:22-147.75.109.163:34750.service - OpenSSH per-connection server daemon (147.75.109.163:34750). Sep 12 17:35:09.390285 sshd[2412]: Accepted publickey for core from 147.75.109.163 port 34750 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:09.391998 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:09.397979 systemd-logind[2070]: New session 7 of user core. Sep 12 17:35:09.407181 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:35:09.540797 sudo[2416]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:35:09.541102 sudo[2416]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:35:09.542187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:35:09.550378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:09.554224 sudo[2416]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:09.578468 sshd[2412]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:09.583071 systemd[1]: sshd@6-172.31.16.204:22-147.75.109.163:34750.service: Deactivated successfully. Sep 12 17:35:09.587271 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:35:09.588567 systemd-logind[2070]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:35:09.590093 systemd-logind[2070]: Removed session 7. Sep 12 17:35:09.609128 systemd[1]: Started sshd@7-172.31.16.204:22-147.75.109.163:34766.service - OpenSSH per-connection server daemon (147.75.109.163:34766). Sep 12 17:35:09.776664 sshd[2425]: Accepted publickey for core from 147.75.109.163 port 34766 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:09.777518 sshd[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:09.784373 systemd-logind[2070]: New session 8 of user core. Sep 12 17:35:09.793180 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:35:09.853140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:35:09.881329 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:35:09.908061 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:35:09.908469 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:35:09.917428 sudo[2443]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:09.925974 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:35:09.926384 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:35:09.940390 kubelet[2437]: E0912 17:35:09.940314 2437 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:35:09.944940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:35:09.945176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:35:09.958309 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:35:09.961864 auditctl[2449]: No rules Sep 12 17:35:09.962681 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:35:09.963071 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:35:09.972764 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:35:10.012360 augenrules[2468]: No rules Sep 12 17:35:10.015581 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:35:10.026145 sudo[2442]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:10.050137 sshd[2425]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:10.056469 systemd[1]: sshd@7-172.31.16.204:22-147.75.109.163:34766.service: Deactivated successfully. Sep 12 17:35:10.063337 systemd-logind[2070]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:35:10.064244 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:35:10.065565 systemd-logind[2070]: Removed session 8. Sep 12 17:35:10.082903 systemd[1]: Started sshd@8-172.31.16.204:22-147.75.109.163:35262.service - OpenSSH per-connection server daemon (147.75.109.163:35262). Sep 12 17:35:10.257670 sshd[2477]: Accepted publickey for core from 147.75.109.163 port 35262 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:35:10.263509 sshd[2477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:10.272156 systemd-logind[2070]: New session 9 of user core. Sep 12 17:35:10.279132 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:35:10.380051 sudo[2481]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:35:10.380461 sudo[2481]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:35:11.197142 systemd[1]: Starting docker.service - Docker Application Container Engine... 
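[Editor's note] The sudo systemctl restart audit-rules above is serviced through systemd's D-Bus API, the same bus dbus.service exposes. The same restart issued programmatically, a sketch assuming github.com/coreos/go-systemd/v22/dbus:

package main

import (
    "context"
    "log"

    "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
    ctx := context.Background()
    conn, err := dbus.NewWithContext(ctx) // connect to the system bus
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // The channel receives the job result, e.g. "done" on success.
    done := make(chan string, 1)
    if _, err := conn.RestartUnitContext(ctx, "audit-rules.service", "replace", done); err != nil {
        log.Fatal(err)
    }
    log.Println("restart job finished:", <-done)
}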
Sep 12 17:35:11.211564 (dockerd)[2496]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:35:11.954561 dockerd[2496]: time="2025-09-12T17:35:11.954497465Z" level=info msg="Starting up" Sep 12 17:35:12.337121 dockerd[2496]: time="2025-09-12T17:35:12.336817125Z" level=info msg="Loading containers: start." Sep 12 17:35:12.474745 kernel: Initializing XFRM netlink socket Sep 12 17:35:12.509778 (udev-worker)[2519]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:35:12.578633 systemd-networkd[1650]: docker0: Link UP Sep 12 17:35:12.614415 dockerd[2496]: time="2025-09-12T17:35:12.613796389Z" level=info msg="Loading containers: done." Sep 12 17:35:12.655610 dockerd[2496]: time="2025-09-12T17:35:12.655449904Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:35:12.655982 dockerd[2496]: time="2025-09-12T17:35:12.655773680Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:35:12.655982 dockerd[2496]: time="2025-09-12T17:35:12.655952371Z" level=info msg="Daemon has completed initialization" Sep 12 17:35:12.721998 dockerd[2496]: time="2025-09-12T17:35:12.721865199Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:35:12.722213 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:35:13.160539 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck980529145-merged.mount: Deactivated successfully. Sep 12 17:35:14.577067 containerd[2102]: time="2025-09-12T17:35:14.577027567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:35:15.189461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532667459.mount: Deactivated successfully. 
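[Editor's note] Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket with plain HTTP; no TCP port is involved. A bare-bones ping against it; the "docker" host in the URL is an arbitrary placeholder, since the custom dialer ignores it:

package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "net"
    "net/http"
)

func main() {
    client := &http.Client{
        Transport: &http.Transport{
            // Route every request through the daemon's unix socket.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/run/docker.sock")
            },
        },
    }
    resp, err := client.Get("http://docker/_ping")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body)) // "200 OK OK" when healthy
}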
Sep 12 17:35:16.595060 containerd[2102]: time="2025-09-12T17:35:16.595001338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.596980 containerd[2102]: time="2025-09-12T17:35:16.596905439Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:35:16.599623 containerd[2102]: time="2025-09-12T17:35:16.599549146Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.604572 containerd[2102]: time="2025-09-12T17:35:16.604498450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.606756 containerd[2102]: time="2025-09-12T17:35:16.605670654Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.028597602s" Sep 12 17:35:16.606756 containerd[2102]: time="2025-09-12T17:35:16.605746691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:35:16.606756 containerd[2102]: time="2025-09-12T17:35:16.606627072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:35:18.109279 containerd[2102]: time="2025-09-12T17:35:18.109223764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:18.111425 containerd[2102]: time="2025-09-12T17:35:18.111351777Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:35:18.114033 containerd[2102]: time="2025-09-12T17:35:18.113962371Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:18.119079 containerd[2102]: time="2025-09-12T17:35:18.118650974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:18.120091 containerd[2102]: time="2025-09-12T17:35:18.120044348Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.513379326s" Sep 12 17:35:18.120204 containerd[2102]: time="2025-09-12T17:35:18.120097174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 
17:35:18.120610 containerd[2102]: time="2025-09-12T17:35:18.120577713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:35:19.359432 containerd[2102]: time="2025-09-12T17:35:19.359142856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:19.360165 containerd[2102]: time="2025-09-12T17:35:19.360113756Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:35:19.360895 containerd[2102]: time="2025-09-12T17:35:19.360862651Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:19.364875 containerd[2102]: time="2025-09-12T17:35:19.364823042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:19.366631 containerd[2102]: time="2025-09-12T17:35:19.365922086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.24530279s" Sep 12 17:35:19.366631 containerd[2102]: time="2025-09-12T17:35:19.365967465Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:35:19.366631 containerd[2102]: time="2025-09-12T17:35:19.366496720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:35:19.979020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:35:19.988160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:20.282096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:20.293644 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:35:20.384134 kubelet[2718]: E0912 17:35:20.384090 2718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:35:20.387640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:35:20.389547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:35:20.558274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200521132.mount: Deactivated successfully. 
Sep 12 17:35:21.145889 containerd[2102]: time="2025-09-12T17:35:21.145823248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:21.148017 containerd[2102]: time="2025-09-12T17:35:21.147945720Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 12 17:35:21.150255 containerd[2102]: time="2025-09-12T17:35:21.150193128Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:21.153344 containerd[2102]: time="2025-09-12T17:35:21.153275123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:21.154312 containerd[2102]: time="2025-09-12T17:35:21.154048832Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.787511824s"
Sep 12 17:35:21.154312 containerd[2102]: time="2025-09-12T17:35:21.154093447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 17:35:21.154897 containerd[2102]: time="2025-09-12T17:35:21.154710575Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:35:21.725742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034828894.mount: Deactivated successfully.
Sep 12 17:35:22.843934 containerd[2102]: time="2025-09-12T17:35:22.843795091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:22.845651 containerd[2102]: time="2025-09-12T17:35:22.845419990Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:35:22.848761 containerd[2102]: time="2025-09-12T17:35:22.847907000Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:22.854018 containerd[2102]: time="2025-09-12T17:35:22.852805675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:22.854018 containerd[2102]: time="2025-09-12T17:35:22.853882521Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.699129084s"
Sep 12 17:35:22.854018 containerd[2102]: time="2025-09-12T17:35:22.853925771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:35:22.854427 containerd[2102]: time="2025-09-12T17:35:22.854401322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:35:23.341209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414683651.mount: Deactivated successfully.
Sep 12 17:35:23.353579 containerd[2102]: time="2025-09-12T17:35:23.353510353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:23.355914 containerd[2102]: time="2025-09-12T17:35:23.355607194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:35:23.359175 containerd[2102]: time="2025-09-12T17:35:23.357674366Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:23.362177 containerd[2102]: time="2025-09-12T17:35:23.361327168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:23.362177 containerd[2102]: time="2025-09-12T17:35:23.361960561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.528032ms"
Sep 12 17:35:23.362177 containerd[2102]: time="2025-09-12T17:35:23.361990727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:35:23.362484 containerd[2102]: time="2025-09-12T17:35:23.362447222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:35:23.902561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727440964.mount: Deactivated successfully.
Sep 12 17:35:25.652854 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:35:26.216383 containerd[2102]: time="2025-09-12T17:35:26.216315081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.218733 containerd[2102]: time="2025-09-12T17:35:26.218669793Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 12 17:35:26.220970 containerd[2102]: time="2025-09-12T17:35:26.220892478Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.230533 containerd[2102]: time="2025-09-12T17:35:26.229854692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.232490 containerd[2102]: time="2025-09-12T17:35:26.232439191Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.869958192s"
Sep 12 17:35:26.232624 containerd[2102]: time="2025-09-12T17:35:26.232493162Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 17:35:29.203986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:35:29.210083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:35:29.249079 systemd[1]: Reloading requested from client PID 2872 ('systemctl') (unit session-9.scope)...
Sep 12 17:35:29.249100 systemd[1]: Reloading...
Sep 12 17:35:29.358749 zram_generator::config[2915]: No configuration found.
Sep 12 17:35:29.526530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:35:29.624655 systemd[1]: Reloading finished in 375 ms.
Sep 12 17:35:29.666417 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:35:29.666586 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:35:29.668106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:35:29.672576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:35:29.913898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:35:29.917042 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:35:29.963839 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:35:29.963839 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:35:29.963839 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:35:29.968330 kubelet[2985]: I0912 17:35:29.968240 2985 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:35:30.314758 kubelet[2985]: I0912 17:35:30.312697 2985 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:35:30.314758 kubelet[2985]: I0912 17:35:30.312807 2985 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:35:30.314758 kubelet[2985]: I0912 17:35:30.313469 2985 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:35:30.350763 kubelet[2985]: I0912 17:35:30.350619 2985 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:35:30.357461 kubelet[2985]: E0912 17:35:30.357410 2985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:30.373989 kubelet[2985]: E0912 17:35:30.373943 2985 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:35:30.373989 kubelet[2985]: I0912 17:35:30.373980 2985 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:35:30.384028 kubelet[2985]: I0912 17:35:30.383986 2985 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:35:30.386850 kubelet[2985]: I0912 17:35:30.386804 2985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:35:30.387071 kubelet[2985]: I0912 17:35:30.387020 2985 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:35:30.388460 kubelet[2985]: I0912 17:35:30.387075 2985 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 17:35:30.388647 kubelet[2985]: I0912 17:35:30.388472 2985 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:35:30.388647 kubelet[2985]: I0912 17:35:30.388490 2985 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:35:30.388647 kubelet[2985]: I0912 17:35:30.388627 2985 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:35:30.393562 kubelet[2985]: I0912 17:35:30.393295 2985 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:35:30.393562 kubelet[2985]: I0912 17:35:30.393342 2985 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:35:30.393562 kubelet[2985]: I0912 17:35:30.393377 2985 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:35:30.394334 kubelet[2985]: I0912 17:35:30.394285 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:35:30.402448 kubelet[2985]: W0912 17:35:30.402379 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:30.402612 kubelet[2985]: E0912 17:35:30.402463 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:30.405342 kubelet[2985]: W0912 17:35:30.405278 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:30.405481 kubelet[2985]: E0912 17:35:30.405355 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:30.405481 kubelet[2985]: I0912 17:35:30.405471 2985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:35:30.409905 kubelet[2985]: I0912 17:35:30.409864 2985 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:35:30.410830 kubelet[2985]: W0912 17:35:30.410795 2985 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:35:30.411510 kubelet[2985]: I0912 17:35:30.411484 2985 server.go:1274] "Started kubelet"
Sep 12 17:35:30.413972 kubelet[2985]: I0912 17:35:30.413932 2985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:35:30.423957 kubelet[2985]: I0912 17:35:30.422936 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:35:30.423957 kubelet[2985]: I0912 17:35:30.423506 2985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:35:30.424646 kubelet[2985]: I0912 17:35:30.424611 2985 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:35:30.432138 kubelet[2985]: E0912 17:35:30.429826 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.204:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.204:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-204.186499802f66a56a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-204,UID:ip-172-31-16-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-204,},FirstTimestamp:2025-09-12 17:35:30.41145585 +0000 UTC m=+0.490799472,LastTimestamp:2025-09-12 17:35:30.41145585 +0000 UTC m=+0.490799472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-204,}"
Sep 12 17:35:30.433709 kubelet[2985]: I0912 17:35:30.433027 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:35:30.435210 kubelet[2985]: I0912 17:35:30.434704 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:35:30.439421 kubelet[2985]: I0912 17:35:30.439381 2985 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:35:30.439584 kubelet[2985]: E0912 17:35:30.439532 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:30.441024 kubelet[2985]: I0912 17:35:30.440229 2985 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:35:30.441024 kubelet[2985]: I0912 17:35:30.440312 2985 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:35:30.441983 kubelet[2985]: W0912 17:35:30.441855 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:30.441983 kubelet[2985]: E0912 17:35:30.441928 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:30.443488 kubelet[2985]: I0912 17:35:30.443458 2985 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:35:30.443679 kubelet[2985]: I0912 17:35:30.443556 2985 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:35:30.446625 kubelet[2985]: E0912 17:35:30.445988 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": dial tcp 172.31.16.204:6443: connect: connection refused" interval="200ms"
Sep 12 17:35:30.452810 kubelet[2985]: I0912 17:35:30.452666 2985 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:35:30.482619 kubelet[2985]: I0912 17:35:30.482556 2985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:35:30.484212 kubelet[2985]: I0912 17:35:30.484180 2985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:35:30.484212 kubelet[2985]: I0912 17:35:30.484216 2985 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:35:30.484382 kubelet[2985]: I0912 17:35:30.484245 2985 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:35:30.484382 kubelet[2985]: E0912 17:35:30.484292 2985 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:35:30.487063 kubelet[2985]: I0912 17:35:30.486769 2985 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:35:30.487063 kubelet[2985]: I0912 17:35:30.486791 2985 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:35:30.487063 kubelet[2985]: I0912 17:35:30.486810 2985 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:35:30.492643 kubelet[2985]: I0912 17:35:30.492156 2985 policy_none.go:49] "None policy: Start"
Sep 12 17:35:30.492999 kubelet[2985]: W0912 17:35:30.492858 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:30.492999 kubelet[2985]: E0912 17:35:30.492903 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:30.493353 kubelet[2985]: I0912 17:35:30.493320 2985 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:35:30.493353 kubelet[2985]: I0912 17:35:30.493350 2985 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:35:30.500668 kubelet[2985]: I0912 17:35:30.500616 2985 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:35:30.500928 kubelet[2985]: I0912 17:35:30.500895 2985 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:35:30.501015 kubelet[2985]: I0912 17:35:30.500917 2985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:35:30.503045 kubelet[2985]: I0912 17:35:30.502839 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:35:30.507410 kubelet[2985]: E0912 17:35:30.507337 2985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-204\" not found"
Sep 12 17:35:30.620026 kubelet[2985]: I0912 17:35:30.614501 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:30.620026 kubelet[2985]: E0912 17:35:30.615704 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.204:6443/api/v1/nodes\": dial tcp 172.31.16.204:6443: connect: connection refused" node="ip-172-31-16-204"
Sep 12 17:35:30.645384 kubelet[2985]: I0912 17:35:30.645339 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-ca-certs\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204"
Sep 12 17:35:30.646986 kubelet[2985]: E0912 17:35:30.646937 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": dial tcp 172.31.16.204:6443: connect: connection refused" interval="400ms"
Sep 12 17:35:30.745616 kubelet[2985]: I0912 17:35:30.745496 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204"
Sep 12 17:35:30.745774 kubelet[2985]: I0912 17:35:30.745713 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204"
Sep 12 17:35:30.745774 kubelet[2985]: I0912 17:35:30.745765 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d648f1d851075005c592f5ab96b9d57-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-204\" (UID: \"7d648f1d851075005c592f5ab96b9d57\") " pod="kube-system/kube-scheduler-ip-172-31-16-204"
Sep 12 17:35:30.745834 kubelet[2985]: I0912 17:35:30.745807 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204"
Sep 12 17:35:30.745834 kubelet[2985]: I0912 17:35:30.745823 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204"
Sep 12 17:35:30.745896 kubelet[2985]: I0912 17:35:30.745838 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204"
Sep 12 17:35:30.745896 kubelet[2985]: I0912 17:35:30.745855 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204"
Sep 12 17:35:30.745896 kubelet[2985]: I0912 17:35:30.745871 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204"
Sep 12 17:35:30.820123 kubelet[2985]: I0912 17:35:30.819561 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:30.820123 kubelet[2985]: E0912 17:35:30.820106 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.204:6443/api/v1/nodes\": dial tcp 172.31.16.204:6443: connect: connection refused" node="ip-172-31-16-204"
Sep 12 17:35:30.905838 containerd[2102]: time="2025-09-12T17:35:30.905700564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-204,Uid:e1762c1ea389f60fba09f64af33335a2,Namespace:kube-system,Attempt:0,}"
Sep 12 17:35:30.939569 containerd[2102]: time="2025-09-12T17:35:30.939508936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-204,Uid:7d648f1d851075005c592f5ab96b9d57,Namespace:kube-system,Attempt:0,}"
Sep 12 17:35:30.939982 containerd[2102]: time="2025-09-12T17:35:30.939948652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-204,Uid:c1eda001a036058f980a79aa15abe472,Namespace:kube-system,Attempt:0,}"
Sep 12 17:35:31.047454 kubelet[2985]: E0912 17:35:31.047411 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": dial tcp 172.31.16.204:6443: connect: connection refused" interval="800ms"
Sep 12 17:35:31.223133 kubelet[2985]: I0912 17:35:31.222660 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:31.223133 kubelet[2985]: E0912 17:35:31.223029 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.204:6443/api/v1/nodes\": dial tcp 172.31.16.204:6443: connect: connection refused" node="ip-172-31-16-204"
Sep 12 17:35:31.328535 kubelet[2985]: W0912 17:35:31.328352 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:31.329294 kubelet[2985]: E0912 17:35:31.328549 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:31.421657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175907388.mount: Deactivated successfully.
Sep 12 17:35:31.439840 containerd[2102]: time="2025-09-12T17:35:31.439792440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:35:31.441524 containerd[2102]: time="2025-09-12T17:35:31.441459142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 12 17:35:31.444115 containerd[2102]: time="2025-09-12T17:35:31.444057119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:35:31.446106 containerd[2102]: time="2025-09-12T17:35:31.446059051Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:35:31.448514 containerd[2102]: time="2025-09-12T17:35:31.448182474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:35:31.450919 containerd[2102]: time="2025-09-12T17:35:31.450877200Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:35:31.454456 containerd[2102]: time="2025-09-12T17:35:31.454155783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:35:31.457699 containerd[2102]: time="2025-09-12T17:35:31.457650367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:35:31.458642 containerd[2102]: time="2025-09-12T17:35:31.458598672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.574287ms"
Sep 12 17:35:31.464746 containerd[2102]: time="2025-09-12T17:35:31.463982422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.755776ms"
Sep 12 17:35:31.466592 containerd[2102]: time="2025-09-12T17:35:31.466549815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.747501ms"
Sep 12 17:35:31.692043 containerd[2102]: time="2025-09-12T17:35:31.691802478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:35:31.692043 containerd[2102]: time="2025-09-12T17:35:31.691860542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:35:31.692043 containerd[2102]: time="2025-09-12T17:35:31.691876407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.692043 containerd[2102]: time="2025-09-12T17:35:31.691990813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.693147 containerd[2102]: time="2025-09-12T17:35:31.692855381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:35:31.693147 containerd[2102]: time="2025-09-12T17:35:31.692935266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:35:31.693147 containerd[2102]: time="2025-09-12T17:35:31.692960185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.693147 containerd[2102]: time="2025-09-12T17:35:31.693082958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.699338 containerd[2102]: time="2025-09-12T17:35:31.697350580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:35:31.699338 containerd[2102]: time="2025-09-12T17:35:31.697429241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:35:31.699338 containerd[2102]: time="2025-09-12T17:35:31.697455458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.699338 containerd[2102]: time="2025-09-12T17:35:31.697585412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:31.766453 kubelet[2985]: W0912 17:35:31.765903 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:31.766788 kubelet[2985]: E0912 17:35:31.766755 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:31.820063 containerd[2102]: time="2025-09-12T17:35:31.820021316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-204,Uid:7d648f1d851075005c592f5ab96b9d57,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0f9ef6022008986fb09a85e0971cc79707f9de3f4cfaff8eb96f4cca4044ce\""
Sep 12 17:35:31.834704 containerd[2102]: time="2025-09-12T17:35:31.834657685Z" level=info msg="CreateContainer within sandbox \"6c0f9ef6022008986fb09a85e0971cc79707f9de3f4cfaff8eb96f4cca4044ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:35:31.843345 containerd[2102]: time="2025-09-12T17:35:31.843301191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-204,Uid:c1eda001a036058f980a79aa15abe472,Namespace:kube-system,Attempt:0,} returns sandbox id \"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7\""
Sep 12 17:35:31.848562 kubelet[2985]: E0912 17:35:31.848517 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": dial tcp 172.31.16.204:6443: connect: connection refused" interval="1.6s"
Sep 12 17:35:31.849117 containerd[2102]: time="2025-09-12T17:35:31.849071573Z" level=info msg="CreateContainer within sandbox \"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:35:31.855824 containerd[2102]: time="2025-09-12T17:35:31.855756723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-204,Uid:e1762c1ea389f60fba09f64af33335a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70385c8c90c21b0de2b0ba4dd223204ee51f15e825b4ee6c5111f5a92af94d8\""
Sep 12 17:35:31.862258 containerd[2102]: time="2025-09-12T17:35:31.862217764Z" level=info msg="CreateContainer within sandbox \"f70385c8c90c21b0de2b0ba4dd223204ee51f15e825b4ee6c5111f5a92af94d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:35:31.868497 kubelet[2985]: W0912 17:35:31.868429 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:31.868766 kubelet[2985]: E0912 17:35:31.868696 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:31.868967 containerd[2102]: time="2025-09-12T17:35:31.868929049Z" level=info msg="CreateContainer within sandbox \"6c0f9ef6022008986fb09a85e0971cc79707f9de3f4cfaff8eb96f4cca4044ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2\""
Sep 12 17:35:31.878764 containerd[2102]: time="2025-09-12T17:35:31.877994280Z" level=info msg="StartContainer for \"cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2\""
Sep 12 17:35:31.892527 containerd[2102]: time="2025-09-12T17:35:31.892345288Z" level=info msg="CreateContainer within sandbox \"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0\""
Sep 12 17:35:31.893207 containerd[2102]: time="2025-09-12T17:35:31.893174597Z" level=info msg="StartContainer for \"01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0\""
Sep 12 17:35:31.912001 containerd[2102]: time="2025-09-12T17:35:31.911940770Z" level=info msg="CreateContainer within sandbox \"f70385c8c90c21b0de2b0ba4dd223204ee51f15e825b4ee6c5111f5a92af94d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43d0f93d0f81e3a0525a289ddc52c5c20254abfc7de0858de8100a2629bc2e2c\""
Sep 12 17:35:31.915370 containerd[2102]: time="2025-09-12T17:35:31.914047575Z" level=info msg="StartContainer for \"43d0f93d0f81e3a0525a289ddc52c5c20254abfc7de0858de8100a2629bc2e2c\""
Sep 12 17:35:31.937931 kubelet[2985]: W0912 17:35:31.937819 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:31.938795 kubelet[2985]: E0912 17:35:31.938136 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:32.028500 kubelet[2985]: I0912 17:35:32.028450 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:32.029362 kubelet[2985]: E0912 17:35:32.029306 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.204:6443/api/v1/nodes\": dial tcp 172.31.16.204:6443: connect: connection refused" node="ip-172-31-16-204"
Sep 12 17:35:32.040842 containerd[2102]: time="2025-09-12T17:35:32.039969837Z" level=info msg="StartContainer for \"cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2\" returns successfully"
Sep 12 17:35:32.072655 containerd[2102]: time="2025-09-12T17:35:32.071964216Z" level=info msg="StartContainer for \"01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0\" returns successfully"
Sep 12 17:35:32.072655 containerd[2102]: time="2025-09-12T17:35:32.072060414Z" level=info msg="StartContainer for \"43d0f93d0f81e3a0525a289ddc52c5c20254abfc7de0858de8100a2629bc2e2c\" returns successfully"
Sep 12 17:35:32.536757 kubelet[2985]: E0912 17:35:32.536092 2985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:33.114370 kubelet[2985]: W0912 17:35:33.114312 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:33.114370 kubelet[2985]: E0912 17:35:33.114363 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-204&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:33.449527 kubelet[2985]: E0912 17:35:33.449151 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": dial tcp 172.31.16.204:6443: connect: connection refused" interval="3.2s"
Sep 12 17:35:33.623536 kubelet[2985]: W0912 17:35:33.623468 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:33.623536 kubelet[2985]: E0912 17:35:33.623520 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:33.632341 kubelet[2985]: I0912 17:35:33.632235 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:33.632613 kubelet[2985]: E0912 17:35:33.632583 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.204:6443/api/v1/nodes\": dial tcp 172.31.16.204:6443: connect: connection refused" node="ip-172-31-16-204"
Sep 12 17:35:34.363163 kubelet[2985]: W0912 17:35:34.363107 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:34.363163 kubelet[2985]: E0912 17:35:34.363165 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:35.117007 kubelet[2985]: W0912 17:35:35.116962 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.204:6443: connect: connection refused
Sep 12 17:35:35.117475 kubelet[2985]: E0912 17:35:35.117015 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.204:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:35:35.354397 kubelet[2985]: E0912 17:35:35.354284 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.204:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.204:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-204.186499802f66a56a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-204,UID:ip-172-31-16-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-204,},FirstTimestamp:2025-09-12 17:35:30.41145585 +0000 UTC m=+0.490799472,LastTimestamp:2025-09-12 17:35:30.41145585 +0000 UTC m=+0.490799472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-204,}"
Sep 12 17:35:36.835851 kubelet[2985]: I0912 17:35:36.834705 2985 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:37.373925 kubelet[2985]: E0912 17:35:37.373859 2985 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-204\" not found" node="ip-172-31-16-204"
Sep 12 17:35:37.435740 kubelet[2985]: I0912 17:35:37.433668 2985 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-204"
Sep 12 17:35:37.435740 kubelet[2985]: E0912 17:35:37.433703 2985 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-204\": node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.456573 kubelet[2985]: E0912 17:35:37.456448 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.557591 kubelet[2985]: E0912 17:35:37.557549 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.658447 kubelet[2985]: E0912 17:35:37.658302 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.759075 kubelet[2985]: E0912 17:35:37.759029 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.859361 kubelet[2985]: E0912 17:35:37.859312 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:37.960008 kubelet[2985]: E0912 17:35:37.959888 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:38.060621 kubelet[2985]: E0912 17:35:38.060573 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:38.161661 kubelet[2985]: E0912 17:35:38.161615 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:38.408394 kubelet[2985]: I0912 17:35:38.408356 2985 apiserver.go:52] "Watching apiserver"
Sep 12 17:35:38.441008 kubelet[2985]: I0912 17:35:38.440964 2985 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 17:35:39.617910 update_engine[2073]: I20250912 17:35:39.617820 2073 update_attempter.cc:509] Updating boot flags...
Sep 12 17:35:39.686742 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3269)
Sep 12 17:35:39.891943 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3268)
Sep 12 17:35:39.905889 systemd[1]: Reloading requested from client PID 3360 ('systemctl') (unit session-9.scope)...
Sep 12 17:35:39.905913 systemd[1]: Reloading...
Sep 12 17:35:40.151806 zram_generator::config[3490]: No configuration found.
Sep 12 17:35:40.169434 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3268)
Sep 12 17:35:40.390220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:35:40.490134 systemd[1]: Reloading finished in 583 ms.
Sep 12 17:35:40.601357 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:35:40.624084 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:35:40.624517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:35:40.635376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:35:40.982971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:35:40.984221 (kubelet)[3633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:35:41.067975 kubelet[3633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:35:41.067975 kubelet[3633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:35:41.067975 kubelet[3633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:35:41.068526 kubelet[3633]: I0912 17:35:41.068055 3633 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:35:41.077969 kubelet[3633]: I0912 17:35:41.077930 3633 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:35:41.077969 kubelet[3633]: I0912 17:35:41.077961 3633 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:35:41.078280 kubelet[3633]: I0912 17:35:41.078258 3633 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:35:41.080987 kubelet[3633]: I0912 17:35:41.079818 3633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:35:41.093903 kubelet[3633]: I0912 17:35:41.093430 3633 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:35:41.101950 kubelet[3633]: E0912 17:35:41.101897 3633 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:35:41.101950 kubelet[3633]: I0912 17:35:41.101952 3633 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:35:41.113078 kubelet[3633]: I0912 17:35:41.112039 3633 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:35:41.113078 kubelet[3633]: I0912 17:35:41.112562 3633 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:35:41.113078 kubelet[3633]: I0912 17:35:41.112707 3633 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:35:41.115936 kubelet[3633]: I0912 17:35:41.112757 3633 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.119858 3633 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.119907 3633 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.119961 3633 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.120121 3633 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.120140 3633 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.120180 3633 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:35:41.121072 kubelet[3633]: I0912 17:35:41.120193 3633 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:35:41.124743 kubelet[3633]: I0912 17:35:41.123132 3633 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:35:41.124743 kubelet[3633]: I0912 17:35:41.123691 3633 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:35:41.124919 kubelet[3633]: I0912 17:35:41.124855 3633 server.go:1274] "Started kubelet"
Sep 12 17:35:41.135631 kubelet[3633]: I0912 17:35:41.135408 3633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:35:41.149096 kubelet[3633]: I0912 17:35:41.149034 3633 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:35:41.156904 kubelet[3633]: I0912 17:35:41.156872 3633 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:35:41.168797 kubelet[3633]: I0912 17:35:41.167796 3633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:35:41.168797 kubelet[3633]: I0912 17:35:41.168081 3633 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:35:41.168797 kubelet[3633]: I0912 17:35:41.168427 3633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:35:41.174685 kubelet[3633]: I0912 17:35:41.174656 3633 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:35:41.174913 kubelet[3633]: E0912 17:35:41.174851 3633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-204\" not found"
Sep 12 17:35:41.176019 kubelet[3633]: I0912 17:35:41.175992 3633 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:35:41.178319 kubelet[3633]: I0912 17:35:41.178293 3633 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:35:41.180746 kubelet[3633]: I0912 17:35:41.180155 3633 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:35:41.180746 kubelet[3633]: I0912 17:35:41.180305 3633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:35:41.184270 kubelet[3633]: I0912 17:35:41.184246 3633 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:35:41.198734 kubelet[3633]: I0912 17:35:41.198675 3633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:35:41.201125 kubelet[3633]: I0912 17:35:41.201097 3633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:35:41.201600 kubelet[3633]: I0912 17:35:41.201273 3633 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:35:41.201600 kubelet[3633]: I0912 17:35:41.201296 3633 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:35:41.201600 kubelet[3633]: E0912 17:35:41.201334 3633 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:35:41.273236 kubelet[3633]: I0912 17:35:41.273120 3633 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:35:41.273236 kubelet[3633]: I0912 17:35:41.273142 3633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:35:41.273236 kubelet[3633]: I0912 17:35:41.273163 3633 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:35:41.273454 kubelet[3633]: I0912 17:35:41.273347 3633 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:35:41.273454 kubelet[3633]: I0912 17:35:41.273361 3633 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:35:41.273454 kubelet[3633]: I0912 17:35:41.273386 3633 policy_none.go:49] "None policy: Start"
Sep 12 17:35:41.274270 kubelet[3633]: I0912 17:35:41.274241 3633 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:35:41.274270 kubelet[3633]: I0912 17:35:41.274268 3633 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:35:41.274471 kubelet[3633]: I0912 17:35:41.274453 3633 state_mem.go:75] "Updated machine memory state"
Sep 12 17:35:41.276587 kubelet[3633]: I0912 17:35:41.276560 3633 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:35:41.276898 kubelet[3633]: I0912 17:35:41.276782 3633 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:35:41.276898 kubelet[3633]: I0912 17:35:41.276806 3633 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:35:41.278123 kubelet[3633]: I0912 17:35:41.278101 3633 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:35:41.387059 kubelet[3633]: I0912 17:35:41.386964 3633 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-204"
Sep 12 17:35:41.397222 kubelet[3633]: I0912 17:35:41.397170 3633 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-204"
Sep 12 17:35:41.397369 kubelet[3633]: I0912 17:35:41.397282 3633 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-204"
Sep 12 17:35:41.479438 kubelet[3633]: I0912 17:35:41.479398 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204"
Sep 12 17:35:41.479700 kubelet[3633]: I0912 17:35:41.479592 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d648f1d851075005c592f5ab96b9d57-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-204\" (UID: \"7d648f1d851075005c592f5ab96b9d57\") " pod="kube-system/kube-scheduler-ip-172-31-16-204"
Sep 12 17:35:41.479700 kubelet[3633]: I0912 17:35:41.479626 3633 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204" Sep 12 17:35:41.479700 kubelet[3633]: I0912 17:35:41.479650 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204" Sep 12 17:35:41.479700 kubelet[3633]: I0912 17:35:41.479673 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204" Sep 12 17:35:41.479700 kubelet[3633]: I0912 17:35:41.479698 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-ca-certs\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204" Sep 12 17:35:41.479959 kubelet[3633]: I0912 17:35:41.479743 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1762c1ea389f60fba09f64af33335a2-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-204\" (UID: \"e1762c1ea389f60fba09f64af33335a2\") " pod="kube-system/kube-apiserver-ip-172-31-16-204" Sep 12 17:35:41.479959 kubelet[3633]: I0912 17:35:41.479770 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204" Sep 12 17:35:41.479959 kubelet[3633]: I0912 17:35:41.479797 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1eda001a036058f980a79aa15abe472-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-204\" (UID: \"c1eda001a036058f980a79aa15abe472\") " pod="kube-system/kube-controller-manager-ip-172-31-16-204" Sep 12 17:35:42.122686 kubelet[3633]: I0912 17:35:42.122637 3633 apiserver.go:52] "Watching apiserver" Sep 12 17:35:42.176446 kubelet[3633]: I0912 17:35:42.176405 3633 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:35:42.242023 kubelet[3633]: E0912 17:35:42.241842 3633 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-204\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-204" Sep 12 17:35:42.274743 kubelet[3633]: I0912 17:35:42.273411 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-204" podStartSLOduration=1.273390673 podStartE2EDuration="1.273390673s" 
podCreationTimestamp="2025-09-12 17:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:42.262170059 +0000 UTC m=+1.268289688" watchObservedRunningTime="2025-09-12 17:35:42.273390673 +0000 UTC m=+1.279510311" Sep 12 17:35:42.289339 kubelet[3633]: I0912 17:35:42.288940 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-204" podStartSLOduration=1.288877297 podStartE2EDuration="1.288877297s" podCreationTimestamp="2025-09-12 17:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:42.273558238 +0000 UTC m=+1.279677866" watchObservedRunningTime="2025-09-12 17:35:42.288877297 +0000 UTC m=+1.294996907" Sep 12 17:35:42.289339 kubelet[3633]: I0912 17:35:42.289057 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-204" podStartSLOduration=1.289050063 podStartE2EDuration="1.289050063s" podCreationTimestamp="2025-09-12 17:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:42.288613618 +0000 UTC m=+1.294733247" watchObservedRunningTime="2025-09-12 17:35:42.289050063 +0000 UTC m=+1.295169693" Sep 12 17:35:46.061336 kubelet[3633]: I0912 17:35:46.061304 3633 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:35:46.061958 kubelet[3633]: I0912 17:35:46.061807 3633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:35:46.062005 containerd[2102]: time="2025-09-12T17:35:46.061609064Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
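The entries above show the kubelet receiving its node's pod CIDR (192.168.0.0/24) and pushing it to the container runtime over CRI, while containerd keeps waiting for a CNI config to appear. The address arithmetic behind that allocation, as a standard-library sketch (illustrative, not kubelet code):

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // The pod CIDR logged by kubelet_network.go above.
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            log.Fatal(err)
        }
        ones, bits := ipnet.Mask.Size()
        // A /24 yields 2^(32-24) = 256 pod addresses on this node
        // (fewer in practice once the CNI reserves network/gateway IPs).
        fmt.Printf("network=%s addresses=%d\n", ipnet, 1<<(bits-ones))
    }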
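Looking back at the NodeConfig dump from 17:35:41, the HardEvictionThresholds list is ordinary JSON and can be decoded as such; this is how the defaults (nodefs.available < 10%, memory.available < 100Mi, and so on) are represented. A sketch that round-trips two of those entries (field names copied from the dump; the struct is an illustrative stand-in, not the kubelet's own type):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // threshold mirrors one entry of the logged HardEvictionThresholds list.
    type threshold struct {
        Signal   string `json:"Signal"`
        Operator string `json:"Operator"`
        Value    struct {
            Quantity   *string `json:"Quantity"`
            Percentage float64 `json:"Percentage"`
        } `json:"Value"`
    }

    func main() {
        // Two entries copied from the kubelet's NodeConfig above.
        raw := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
                 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            log.Fatal(err)
        }
        for _, t := range ts {
            q := "<nil>"
            if t.Value.Quantity != nil {
                q = *t.Value.Quantity
            }
            fmt.Printf("%s %s quantity=%s percentage=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
        }
    }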
Sep 12 17:35:47.014919 kubelet[3633]: I0912 17:35:47.014860 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7f12185-fdea-40a5-a6c1-43b43c85ba26-xtables-lock\") pod \"kube-proxy-hcmj2\" (UID: \"e7f12185-fdea-40a5-a6c1-43b43c85ba26\") " pod="kube-system/kube-proxy-hcmj2" Sep 12 17:35:47.014919 kubelet[3633]: I0912 17:35:47.014916 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7f12185-fdea-40a5-a6c1-43b43c85ba26-lib-modules\") pod \"kube-proxy-hcmj2\" (UID: \"e7f12185-fdea-40a5-a6c1-43b43c85ba26\") " pod="kube-system/kube-proxy-hcmj2" Sep 12 17:35:47.015335 kubelet[3633]: I0912 17:35:47.014952 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4m5v\" (UniqueName: \"kubernetes.io/projected/e7f12185-fdea-40a5-a6c1-43b43c85ba26-kube-api-access-z4m5v\") pod \"kube-proxy-hcmj2\" (UID: \"e7f12185-fdea-40a5-a6c1-43b43c85ba26\") " pod="kube-system/kube-proxy-hcmj2" Sep 12 17:35:47.015335 kubelet[3633]: I0912 17:35:47.014977 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7f12185-fdea-40a5-a6c1-43b43c85ba26-kube-proxy\") pod \"kube-proxy-hcmj2\" (UID: \"e7f12185-fdea-40a5-a6c1-43b43c85ba26\") " pod="kube-system/kube-proxy-hcmj2" Sep 12 17:35:47.230392 kubelet[3633]: I0912 17:35:47.219324 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9656f212-db02-4370-806e-0de368404ff5-var-lib-calico\") pod \"tigera-operator-58fc44c59b-282p4\" (UID: \"9656f212-db02-4370-806e-0de368404ff5\") " pod="tigera-operator/tigera-operator-58fc44c59b-282p4" Sep 12 17:35:47.238940 kubelet[3633]: I0912 17:35:47.230638 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz2tf\" (UniqueName: \"kubernetes.io/projected/9656f212-db02-4370-806e-0de368404ff5-kube-api-access-gz2tf\") pod \"tigera-operator-58fc44c59b-282p4\" (UID: \"9656f212-db02-4370-806e-0de368404ff5\") " pod="tigera-operator/tigera-operator-58fc44c59b-282p4" Sep 12 17:35:47.259659 containerd[2102]: time="2025-09-12T17:35:47.259403503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcmj2,Uid:e7f12185-fdea-40a5-a6c1-43b43c85ba26,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:47.293304 containerd[2102]: time="2025-09-12T17:35:47.292732508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:47.293304 containerd[2102]: time="2025-09-12T17:35:47.292889397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:47.293304 containerd[2102]: time="2025-09-12T17:35:47.292917305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.293304 containerd[2102]: time="2025-09-12T17:35:47.293087494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.324009 systemd[1]: run-containerd-runc-k8s.io-89f40e3780c8dc45b523570c782c65fa8c62f7840fa1718b10a74e4f77af8502-runc.57kmR2.mount: Deactivated successfully. Sep 12 17:35:47.351923 containerd[2102]: time="2025-09-12T17:35:47.351876786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcmj2,Uid:e7f12185-fdea-40a5-a6c1-43b43c85ba26,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f40e3780c8dc45b523570c782c65fa8c62f7840fa1718b10a74e4f77af8502\"" Sep 12 17:35:47.355699 containerd[2102]: time="2025-09-12T17:35:47.355532976Z" level=info msg="CreateContainer within sandbox \"89f40e3780c8dc45b523570c782c65fa8c62f7840fa1718b10a74e4f77af8502\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:35:47.372210 containerd[2102]: time="2025-09-12T17:35:47.372075318Z" level=info msg="CreateContainer within sandbox \"89f40e3780c8dc45b523570c782c65fa8c62f7840fa1718b10a74e4f77af8502\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edf8ed7cb4fb5e26932ccf98a01164b633d5618d10ece0241b8ff50f1d42854f\"" Sep 12 17:35:47.373519 containerd[2102]: time="2025-09-12T17:35:47.373480291Z" level=info msg="StartContainer for \"edf8ed7cb4fb5e26932ccf98a01164b633d5618d10ece0241b8ff50f1d42854f\"" Sep 12 17:35:47.440223 containerd[2102]: time="2025-09-12T17:35:47.440173889Z" level=info msg="StartContainer for \"edf8ed7cb4fb5e26932ccf98a01164b633d5618d10ece0241b8ff50f1d42854f\" returns successfully" Sep 12 17:35:47.464682 containerd[2102]: time="2025-09-12T17:35:47.464619579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-282p4,Uid:9656f212-db02-4370-806e-0de368404ff5,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:35:47.492969 containerd[2102]: time="2025-09-12T17:35:47.492710224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:47.493256 containerd[2102]: time="2025-09-12T17:35:47.492906876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:47.493256 containerd[2102]: time="2025-09-12T17:35:47.492929981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.493544 containerd[2102]: time="2025-09-12T17:35:47.493170618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:47.568895 containerd[2102]: time="2025-09-12T17:35:47.568753934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-282p4,Uid:9656f212-db02-4370-806e-0de368404ff5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"930cbdc68e65edb07b2d665ae7bcc965a7ac3844977ba553c680f55adb352f34\"" Sep 12 17:35:47.572257 containerd[2102]: time="2025-09-12T17:35:47.572210016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:35:48.543011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981342685.mount: Deactivated successfully. 
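Those containerd entries carry RFC 3339 timestamps with nanosecond precision, so the kube-proxy startup latency can be read straight out of the log: RunPodSandbox returns at 17:35:47.351876786Z and StartContainer returns at 17:35:47.440173889Z. A small sketch of that arithmetic (timestamps copied verbatim from the entries above):

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                log.Fatal(err)
            }
            return t
        }
        // RunPodSandbox returning and StartContainer returning, per the log.
        sandboxReady := parse("2025-09-12T17:35:47.351876786Z")
        started := parse("2025-09-12T17:35:47.440173889Z")
        // Prints 88.297103ms: sandbox-ready to kube-proxy running.
        fmt.Println(started.Sub(sandboxReady))
    }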
Sep 12 17:35:49.536314 kubelet[3633]: I0912 17:35:49.535432 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcmj2" podStartSLOduration=3.535268029 podStartE2EDuration="3.535268029s" podCreationTimestamp="2025-09-12 17:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:48.272082632 +0000 UTC m=+7.278202262" watchObservedRunningTime="2025-09-12 17:35:49.535268029 +0000 UTC m=+8.541387660" Sep 12 17:35:49.559191 containerd[2102]: time="2025-09-12T17:35:49.559136423Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:49.560666 containerd[2102]: time="2025-09-12T17:35:49.560470853Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:35:49.561646 containerd[2102]: time="2025-09-12T17:35:49.561590508Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:49.563742 containerd[2102]: time="2025-09-12T17:35:49.563689979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:49.564624 containerd[2102]: time="2025-09-12T17:35:49.564468033Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.992213463s" Sep 12 17:35:49.564624 containerd[2102]: time="2025-09-12T17:35:49.564505191Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:35:49.569301 containerd[2102]: time="2025-09-12T17:35:49.569271079Z" level=info msg="CreateContainer within sandbox \"930cbdc68e65edb07b2d665ae7bcc965a7ac3844977ba553c680f55adb352f34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:35:49.593807 containerd[2102]: time="2025-09-12T17:35:49.593624497Z" level=info msg="CreateContainer within sandbox \"930cbdc68e65edb07b2d665ae7bcc965a7ac3844977ba553c680f55adb352f34\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887\"" Sep 12 17:35:49.594467 containerd[2102]: time="2025-09-12T17:35:49.594420497Z" level=info msg="StartContainer for \"e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887\"" Sep 12 17:35:49.667065 containerd[2102]: time="2025-09-12T17:35:49.667021913Z" level=info msg="StartContainer for \"e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887\" returns successfully" Sep 12 17:35:51.348876 kubelet[3633]: I0912 17:35:51.348786 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-282p4" podStartSLOduration=2.343912653 podStartE2EDuration="4.34081871s" podCreationTimestamp="2025-09-12 17:35:47 +0000 UTC" firstStartedPulling="2025-09-12 17:35:47.570885061 +0000 UTC m=+6.577004675" 
lastFinishedPulling="2025-09-12 17:35:49.567791112 +0000 UTC m=+8.573910732" observedRunningTime="2025-09-12 17:35:50.293376014 +0000 UTC m=+9.299495647" watchObservedRunningTime="2025-09-12 17:35:51.34081871 +0000 UTC m=+10.346938327" Sep 12 17:35:56.865826 sudo[2481]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:56.891048 sshd[2477]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:56.899450 systemd[1]: sshd@8-172.31.16.204:22-147.75.109.163:35262.service: Deactivated successfully. Sep 12 17:35:56.910255 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:35:56.910610 systemd-logind[2070]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:35:56.916677 systemd-logind[2070]: Removed session 9. Sep 12 17:36:04.544020 kubelet[3633]: I0912 17:36:04.543975 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83a4e9a4-0a7b-47d5-9dae-08e2374e05b0-typha-certs\") pod \"calico-typha-6cc58f5787-jdpwq\" (UID: \"83a4e9a4-0a7b-47d5-9dae-08e2374e05b0\") " pod="calico-system/calico-typha-6cc58f5787-jdpwq" Sep 12 17:36:04.544697 kubelet[3633]: I0912 17:36:04.544043 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7xh\" (UniqueName: \"kubernetes.io/projected/83a4e9a4-0a7b-47d5-9dae-08e2374e05b0-kube-api-access-qp7xh\") pod \"calico-typha-6cc58f5787-jdpwq\" (UID: \"83a4e9a4-0a7b-47d5-9dae-08e2374e05b0\") " pod="calico-system/calico-typha-6cc58f5787-jdpwq" Sep 12 17:36:04.544697 kubelet[3633]: I0912 17:36:04.544074 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a4e9a4-0a7b-47d5-9dae-08e2374e05b0-tigera-ca-bundle\") pod \"calico-typha-6cc58f5787-jdpwq\" (UID: \"83a4e9a4-0a7b-47d5-9dae-08e2374e05b0\") " pod="calico-system/calico-typha-6cc58f5787-jdpwq" Sep 12 17:36:04.745951 kubelet[3633]: I0912 17:36:04.745893 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-node-certs\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746120 kubelet[3633]: I0912 17:36:04.745964 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6k48\" (UniqueName: \"kubernetes.io/projected/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-kube-api-access-g6k48\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746120 kubelet[3633]: I0912 17:36:04.745990 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-cni-log-dir\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746120 kubelet[3633]: I0912 17:36:04.746012 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-cni-bin-dir\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 
17:36:04.746120 kubelet[3633]: I0912 17:36:04.746032 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-lib-modules\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746120 kubelet[3633]: I0912 17:36:04.746052 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-policysync\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746387 kubelet[3633]: I0912 17:36:04.746076 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-tigera-ca-bundle\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746387 kubelet[3633]: I0912 17:36:04.746097 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-xtables-lock\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746387 kubelet[3633]: I0912 17:36:04.746127 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-cni-net-dir\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746387 kubelet[3633]: I0912 17:36:04.746158 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-flexvol-driver-host\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746387 kubelet[3633]: I0912 17:36:04.746183 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-var-lib-calico\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.746613 kubelet[3633]: I0912 17:36:04.746217 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa5393e4-f8d6-44dc-b65d-0e65926f7d0d-var-run-calico\") pod \"calico-node-g9nsx\" (UID: \"fa5393e4-f8d6-44dc-b65d-0e65926f7d0d\") " pod="calico-system/calico-node-g9nsx" Sep 12 17:36:04.804345 containerd[2102]: time="2025-09-12T17:36:04.802966408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cc58f5787-jdpwq,Uid:83a4e9a4-0a7b-47d5-9dae-08e2374e05b0,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:04.821713 kubelet[3633]: E0912 17:36:04.821445 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:04.858926 kubelet[3633]: E0912 17:36:04.858886 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.859265 kubelet[3633]: W0912 17:36:04.859120 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.859641 kubelet[3633]: E0912 17:36:04.859402 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.862784 kubelet[3633]: E0912 17:36:04.862689 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.863704 kubelet[3633]: W0912 17:36:04.863106 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.868235 kubelet[3633]: E0912 17:36:04.864892 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.868235 kubelet[3633]: W0912 17:36:04.864914 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.868235 kubelet[3633]: E0912 17:36:04.864959 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.868681 kubelet[3633]: E0912 17:36:04.864817 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.869384 kubelet[3633]: E0912 17:36:04.869356 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.869941 kubelet[3633]: W0912 17:36:04.869601 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.870066 kubelet[3633]: E0912 17:36:04.870051 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.871115 kubelet[3633]: E0912 17:36:04.871099 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.871792 kubelet[3633]: W0912 17:36:04.871697 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.872563 kubelet[3633]: E0912 17:36:04.872061 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.890914 kubelet[3633]: E0912 17:36:04.889153 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.890914 kubelet[3633]: W0912 17:36:04.890328 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.890914 kubelet[3633]: E0912 17:36:04.890367 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.902005 containerd[2102]: time="2025-09-12T17:36:04.898166013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:04.902005 containerd[2102]: time="2025-09-12T17:36:04.901593293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:04.904840 containerd[2102]: time="2025-09-12T17:36:04.902659864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:04.911141 containerd[2102]: time="2025-09-12T17:36:04.910834361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:04.913023 kubelet[3633]: E0912 17:36:04.912958 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.913023 kubelet[3633]: W0912 17:36:04.912979 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.913313 kubelet[3633]: E0912 17:36:04.913101 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.913921 kubelet[3633]: E0912 17:36:04.913903 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.914107 kubelet[3633]: W0912 17:36:04.914021 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.914107 kubelet[3633]: E0912 17:36:04.914049 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.914555 kubelet[3633]: E0912 17:36:04.914542 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.914706 kubelet[3633]: W0912 17:36:04.914629 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.914706 kubelet[3633]: E0912 17:36:04.914652 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.915362 kubelet[3633]: E0912 17:36:04.915062 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.915362 kubelet[3633]: W0912 17:36:04.915075 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.915362 kubelet[3633]: E0912 17:36:04.915094 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.915698 kubelet[3633]: E0912 17:36:04.915578 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.915698 kubelet[3633]: W0912 17:36:04.915591 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.915698 kubelet[3633]: E0912 17:36:04.915606 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.916305 kubelet[3633]: E0912 17:36:04.916167 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.916305 kubelet[3633]: W0912 17:36:04.916183 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.916305 kubelet[3633]: E0912 17:36:04.916199 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.916742 kubelet[3633]: E0912 17:36:04.916649 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.916742 kubelet[3633]: W0912 17:36:04.916663 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.916742 kubelet[3633]: E0912 17:36:04.916680 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.917425 kubelet[3633]: E0912 17:36:04.917304 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.917425 kubelet[3633]: W0912 17:36:04.917319 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.917425 kubelet[3633]: E0912 17:36:04.917333 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.917941 kubelet[3633]: E0912 17:36:04.917813 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.917941 kubelet[3633]: W0912 17:36:04.917829 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.917941 kubelet[3633]: E0912 17:36:04.917843 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.918554 kubelet[3633]: E0912 17:36:04.918263 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.918554 kubelet[3633]: W0912 17:36:04.918274 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.918554 kubelet[3633]: E0912 17:36:04.918289 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.919056 kubelet[3633]: E0912 17:36:04.918823 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.919056 kubelet[3633]: W0912 17:36:04.918863 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.919056 kubelet[3633]: E0912 17:36:04.918879 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.919426 kubelet[3633]: E0912 17:36:04.919270 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.919426 kubelet[3633]: W0912 17:36:04.919281 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.919426 kubelet[3633]: E0912 17:36:04.919295 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.920006 kubelet[3633]: E0912 17:36:04.919787 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.920006 kubelet[3633]: W0912 17:36:04.919801 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.920006 kubelet[3633]: E0912 17:36:04.919816 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.920290 kubelet[3633]: E0912 17:36:04.920186 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.920290 kubelet[3633]: W0912 17:36:04.920199 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.920290 kubelet[3633]: E0912 17:36:04.920212 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.920871 kubelet[3633]: E0912 17:36:04.920578 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.920871 kubelet[3633]: W0912 17:36:04.920591 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.920871 kubelet[3633]: E0912 17:36:04.920605 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.921255 kubelet[3633]: E0912 17:36:04.921099 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.921255 kubelet[3633]: W0912 17:36:04.921132 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.921255 kubelet[3633]: E0912 17:36:04.921146 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.921831 kubelet[3633]: E0912 17:36:04.921569 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.921831 kubelet[3633]: W0912 17:36:04.921593 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.921831 kubelet[3633]: E0912 17:36:04.921606 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.922324 kubelet[3633]: E0912 17:36:04.922077 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.922324 kubelet[3633]: W0912 17:36:04.922091 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.922324 kubelet[3633]: E0912 17:36:04.922105 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.922854 kubelet[3633]: E0912 17:36:04.922575 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.922854 kubelet[3633]: W0912 17:36:04.922590 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.922854 kubelet[3633]: E0912 17:36:04.922615 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.923487 kubelet[3633]: E0912 17:36:04.923222 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.923487 kubelet[3633]: W0912 17:36:04.923237 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.923487 kubelet[3633]: E0912 17:36:04.923256 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.926513 kubelet[3633]: E0912 17:36:04.926307 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.926513 kubelet[3633]: W0912 17:36:04.926348 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.926513 kubelet[3633]: E0912 17:36:04.926371 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.936035 containerd[2102]: time="2025-09-12T17:36:04.935986817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9nsx,Uid:fa5393e4-f8d6-44dc-b65d-0e65926f7d0d,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:04.953246 kubelet[3633]: E0912 17:36:04.953204 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.953246 kubelet[3633]: W0912 17:36:04.953240 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.953458 kubelet[3633]: E0912 17:36:04.953270 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:36:04.953458 kubelet[3633]: I0912 17:36:04.953312 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c274700-5d2a-486e-a911-3e7d7162510d-registration-dir\") pod \"csi-node-driver-c8fhr\" (UID: \"4c274700-5d2a-486e-a911-3e7d7162510d\") " pod="calico-system/csi-node-driver-c8fhr" Sep 12 17:36:04.957747 kubelet[3633]: E0912 17:36:04.956888 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.957747 kubelet[3633]: W0912 17:36:04.956921 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.959677 kubelet[3633]: E0912 17:36:04.958005 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.959677 kubelet[3633]: I0912 17:36:04.958058 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84ch\" (UniqueName: \"kubernetes.io/projected/4c274700-5d2a-486e-a911-3e7d7162510d-kube-api-access-k84ch\") pod \"csi-node-driver-c8fhr\" (UID: \"4c274700-5d2a-486e-a911-3e7d7162510d\") " pod="calico-system/csi-node-driver-c8fhr" Sep 12 17:36:04.961745 kubelet[3633]: E0912 17:36:04.960901 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.961745 kubelet[3633]: W0912 17:36:04.960928 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.961745 kubelet[3633]: E0912 17:36:04.961242 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:36:04.965755 kubelet[3633]: E0912 17:36:04.964917 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:36:04.965755 kubelet[3633]: W0912 17:36:04.964946 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:36:04.965755 kubelet[3633]: E0912 17:36:04.965225 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:36:04.966824 kubelet[3633]: I0912 17:36:04.966776 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c274700-5d2a-486e-a911-3e7d7162510d-kubelet-dir\") pod \"csi-node-driver-c8fhr\" (UID: \"4c274700-5d2a-486e-a911-3e7d7162510d\") " pod="calico-system/csi-node-driver-c8fhr"
Sep 12 17:36:04.969748 kubelet[3633]: E0912 17:36:04.967996 3633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:36:04.969748 kubelet[3633]: W0912 17:36:04.968026 3633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:36:04.969748 kubelet[3633]: E0912 17:36:04.968488 3633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the three-line FlexVolume probe failure above repeats in bursts until Sep 12 17:36:05.145 ...]
Sep 12 17:36:04.985084 kubelet[3633]: I0912 17:36:04.984956 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c274700-5d2a-486e-a911-3e7d7162510d-socket-dir\") pod \"csi-node-driver-c8fhr\" (UID: \"4c274700-5d2a-486e-a911-3e7d7162510d\") " pod="calico-system/csi-node-driver-c8fhr"
Sep 12 17:36:04.996900 kubelet[3633]: I0912 17:36:04.996833 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c274700-5d2a-486e-a911-3e7d7162510d-varrun\") pod \"csi-node-driver-c8fhr\" (UID: \"4c274700-5d2a-486e-a911-3e7d7162510d\") " pod="calico-system/csi-node-driver-c8fhr"
Sep 12 17:36:05.053766 containerd[2102]: time="2025-09-12T17:36:05.053639010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:36:05.054002 containerd[2102]: time="2025-09-12T17:36:05.053780519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:36:05.054002 containerd[2102]: time="2025-09-12T17:36:05.053801485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:36:05.054002 containerd[2102]: time="2025-09-12T17:36:05.053924335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:36:05.170036 containerd[2102]: time="2025-09-12T17:36:05.169994153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9nsx,Uid:fa5393e4-f8d6-44dc-b65d-0e65926f7d0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\""
Sep 12 17:36:05.175888 containerd[2102]: time="2025-09-12T17:36:05.175817550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:36:05.332753 containerd[2102]: time="2025-09-12T17:36:05.332676035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cc58f5787-jdpwq,Uid:83a4e9a4-0a7b-47d5-9dae-08e2374e05b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b3bcde3828135e6aa58dbdd32a0210f63ce4f70b190b4eaba40a8a1e28b581e\""
Sep 12 17:36:06.789699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327668105.mount: Deactivated successfully.
Sep 12 17:36:06.903834 containerd[2102]: time="2025-09-12T17:36:06.903758367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:36:06.906434 containerd[2102]: time="2025-09-12T17:36:06.906289738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501"
Sep 12 17:36:06.907822 containerd[2102]: time="2025-09-12T17:36:06.907714484Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:36:06.911287 containerd[2102]: time="2025-09-12T17:36:06.911246424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:36:06.913315 containerd[2102]: time="2025-09-12T17:36:06.912440421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.736389496s"
Sep 12 17:36:06.913315 containerd[2102]: time="2025-09-12T17:36:06.912490744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 17:36:06.914543 containerd[2102]: time="2025-09-12T17:36:06.914504075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:36:06.915795 containerd[2102]: time="2025-09-12T17:36:06.915760902Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:36:06.964708 containerd[2102]: time="2025-09-12T17:36:06.964638264Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc\""
Sep 12 17:36:06.965561 containerd[2102]: time="2025-09-12T17:36:06.965499532Z" level=info msg="StartContainer for \"9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc\""
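The FlexVolume triplet above comes from kubelet probing the plugin directory nodeagent~uds: the directory exists under /opt/libexec/kubernetes/kubelet-plugins/volume/exec, but the uds driver binary inside it does not, so the `init` call produces no output, and unmarshalling zero bytes is what yields Go's "unexpected end of JSON input". A minimal Go sketch of that failure mode (an illustration, not kubelet's actual code):

```go
// Sketch of the probe behind the driver-call.go errors above; an
// illustration of the failure mode, not kubelet's implementation.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print
// on stdout, e.g. {"status":"Success"}.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeInit(driver string) error {
	// With the binary missing, err is non-nil and out stays empty; kubelet
	// logs this as the W-level "driver call failed" line.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			driver, err, string(out))
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// json.Unmarshal on zero bytes returns exactly
		// "unexpected end of JSON input", the E-level message above.
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v",
			string(out), uerr)
	}
	return nil
}

func main() {
	if err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Installing the driver binary (or removing the stale plugin directory) is what stops this probe loop; here the flexvol-driver init container being started below does the installing.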
msg="StartContainer for \"9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc\"" Sep 12 17:36:07.043582 containerd[2102]: time="2025-09-12T17:36:07.043464565Z" level=info msg="StartContainer for \"9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc\" returns successfully" Sep 12 17:36:07.153613 containerd[2102]: time="2025-09-12T17:36:07.123709494Z" level=info msg="shim disconnected" id=9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc namespace=k8s.io Sep 12 17:36:07.153613 containerd[2102]: time="2025-09-12T17:36:07.153609551Z" level=warning msg="cleaning up after shim disconnected" id=9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc namespace=k8s.io Sep 12 17:36:07.153999 containerd[2102]: time="2025-09-12T17:36:07.153631570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:07.202429 kubelet[3633]: E0912 17:36:07.201704 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:07.748071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9279d28c089bf15e315c3d36fdd453c9711510972757914dbd167e877ba8c8bc-rootfs.mount: Deactivated successfully. Sep 12 17:36:09.202700 kubelet[3633]: E0912 17:36:09.201708 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:09.600395 containerd[2102]: time="2025-09-12T17:36:09.600338717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.602677 containerd[2102]: time="2025-09-12T17:36:09.602619494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 12 17:36:09.605975 containerd[2102]: time="2025-09-12T17:36:09.605910072Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.611554 containerd[2102]: time="2025-09-12T17:36:09.611461013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.612204 containerd[2102]: time="2025-09-12T17:36:09.612050272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.69750243s" Sep 12 17:36:09.612204 containerd[2102]: time="2025-09-12T17:36:09.612100874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:36:09.613582 containerd[2102]: time="2025-09-12T17:36:09.613308414Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:36:09.634564 containerd[2102]: time="2025-09-12T17:36:09.634515923Z" level=info msg="CreateContainer within sandbox \"1b3bcde3828135e6aa58dbdd32a0210f63ce4f70b190b4eaba40a8a1e28b581e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:36:09.698006 containerd[2102]: time="2025-09-12T17:36:09.697836119Z" level=info msg="CreateContainer within sandbox \"1b3bcde3828135e6aa58dbdd32a0210f63ce4f70b190b4eaba40a8a1e28b581e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5f5248e1c4a82649092fe4a29c8d23909251e594361e675c3e13e6bde044cb52\"" Sep 12 17:36:09.699765 containerd[2102]: time="2025-09-12T17:36:09.698523815Z" level=info msg="StartContainer for \"5f5248e1c4a82649092fe4a29c8d23909251e594361e675c3e13e6bde044cb52\"" Sep 12 17:36:09.807751 containerd[2102]: time="2025-09-12T17:36:09.807679878Z" level=info msg="StartContainer for \"5f5248e1c4a82649092fe4a29c8d23909251e594361e675c3e13e6bde044cb52\" returns successfully" Sep 12 17:36:11.208201 kubelet[3633]: E0912 17:36:11.206126 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:11.524164 kubelet[3633]: I0912 17:36:11.524031 3633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:13.202759 kubelet[3633]: E0912 17:36:13.202471 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:13.686509 containerd[2102]: time="2025-09-12T17:36:13.686460875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.688440 containerd[2102]: time="2025-09-12T17:36:13.688367972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:36:13.690634 containerd[2102]: time="2025-09-12T17:36:13.690574155Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.695356 containerd[2102]: time="2025-09-12T17:36:13.694161604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:13.695356 containerd[2102]: time="2025-09-12T17:36:13.695208924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.081712814s" Sep 12 17:36:13.695356 containerd[2102]: time="2025-09-12T17:36:13.695248773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" 
Sep 12 17:36:13.699712 containerd[2102]: time="2025-09-12T17:36:13.699671170Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:36:13.707644 kubelet[3633]: I0912 17:36:13.707594 3633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:13.733188 kubelet[3633]: I0912 17:36:13.732056 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cc58f5787-jdpwq" podStartSLOduration=5.455647578 podStartE2EDuration="9.73203471s" podCreationTimestamp="2025-09-12 17:36:04 +0000 UTC" firstStartedPulling="2025-09-12 17:36:05.3366724 +0000 UTC m=+24.342792023" lastFinishedPulling="2025-09-12 17:36:09.613059547 +0000 UTC m=+28.619179155" observedRunningTime="2025-09-12 17:36:10.517103546 +0000 UTC m=+29.523223175" watchObservedRunningTime="2025-09-12 17:36:13.73203471 +0000 UTC m=+32.738154334" Sep 12 17:36:13.733851 containerd[2102]: time="2025-09-12T17:36:13.733808329Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5\"" Sep 12 17:36:13.734702 containerd[2102]: time="2025-09-12T17:36:13.734668110Z" level=info msg="StartContainer for \"c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5\"" Sep 12 17:36:13.856972 containerd[2102]: time="2025-09-12T17:36:13.856927668Z" level=info msg="StartContainer for \"c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5\" returns successfully" Sep 12 17:36:14.957191 containerd[2102]: time="2025-09-12T17:36:14.957100070Z" level=info msg="shim disconnected" id=c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5 namespace=k8s.io Sep 12 17:36:14.957191 containerd[2102]: time="2025-09-12T17:36:14.957187376Z" level=warning msg="cleaning up after shim disconnected" id=c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5 namespace=k8s.io Sep 12 17:36:14.958262 containerd[2102]: time="2025-09-12T17:36:14.957200475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:14.957668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9161bb5691fb2b04e74eb110c0ce07051a16950ebff4536d1a1a731b7c7c5a5-rootfs.mount: Deactivated successfully. 
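The pod_startup_latency_tracker line for calico-typha above carries its own arithmetic: podStartE2EDuration (9.73203471s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration excludes the image-pull window, taken here from the monotonic "m=+" readings in the same line. A quick check of the numbers:

```go
// Consistency check of the calico-typha startup-latency line above: the
// SLO duration is the end-to-end duration minus the image-pull window.
package main

import "fmt"

func main() {
	e2e := 9.73203471                   // watchObservedRunningTime - podCreationTimestamp, seconds
	pull := 28.619179155 - 24.342792023 // lastFinishedPulling - firstStartedPulling, monotonic m=+ offsets
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-pull) // 5.455647578s, as reported
}
```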
Sep 12 17:36:15.025591 kubelet[3633]: I0912 17:36:15.025558 3633 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:36:15.178224 kubelet[3633]: I0912 17:36:15.177514 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t8n\" (UniqueName: \"kubernetes.io/projected/f5cdb510-d3ed-48d3-9fb7-62a04476b44d-kube-api-access-p4t8n\") pod \"coredns-7c65d6cfc9-x9v4c\" (UID: \"f5cdb510-d3ed-48d3-9fb7-62a04476b44d\") " pod="kube-system/coredns-7c65d6cfc9-x9v4c" Sep 12 17:36:15.178224 kubelet[3633]: I0912 17:36:15.177581 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx27j\" (UniqueName: \"kubernetes.io/projected/75effd7d-c738-4b1f-a43c-f81ac2da3610-kube-api-access-hx27j\") pod \"calico-kube-controllers-699d545876-frxkl\" (UID: \"75effd7d-c738-4b1f-a43c-f81ac2da3610\") " pod="calico-system/calico-kube-controllers-699d545876-frxkl" Sep 12 17:36:15.178224 kubelet[3633]: I0912 17:36:15.177615 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d68a7ce9-62a6-403c-81fa-26b71803f67f-config-volume\") pod \"coredns-7c65d6cfc9-fx5v5\" (UID: \"d68a7ce9-62a6-403c-81fa-26b71803f67f\") " pod="kube-system/coredns-7c65d6cfc9-fx5v5" Sep 12 17:36:15.178224 kubelet[3633]: I0912 17:36:15.177642 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s88mb\" (UniqueName: \"kubernetes.io/projected/d68a7ce9-62a6-403c-81fa-26b71803f67f-kube-api-access-s88mb\") pod \"coredns-7c65d6cfc9-fx5v5\" (UID: \"d68a7ce9-62a6-403c-81fa-26b71803f67f\") " pod="kube-system/coredns-7c65d6cfc9-fx5v5" Sep 12 17:36:15.178224 kubelet[3633]: I0912 17:36:15.177670 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be594a54-0fca-4de7-bded-7c1589b44a49-calico-apiserver-certs\") pod \"calico-apiserver-7948647f84-7bpbq\" (UID: \"be594a54-0fca-4de7-bded-7c1589b44a49\") " pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" Sep 12 17:36:15.178756 kubelet[3633]: I0912 17:36:15.177697 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjjt\" (UniqueName: \"kubernetes.io/projected/cc21aef5-fbe0-49ed-bfa6-99bf18c52532-kube-api-access-xwjjt\") pod \"goldmane-7988f88666-2hmns\" (UID: \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\") " pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.178756 kubelet[3633]: I0912 17:36:15.177738 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc21aef5-fbe0-49ed-bfa6-99bf18c52532-goldmane-ca-bundle\") pod \"goldmane-7988f88666-2hmns\" (UID: \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\") " pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.178756 kubelet[3633]: I0912 17:36:15.177769 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2be750b9-c275-4533-bf66-976d561de541-calico-apiserver-certs\") pod \"calico-apiserver-7948647f84-ct4ql\" (UID: \"2be750b9-c275-4533-bf66-976d561de541\") " pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" Sep 12 17:36:15.178756 kubelet[3633]: 
I0912 17:36:15.177799 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efea65c0-aa46-423b-a81d-6268432c863c-whisker-backend-key-pair\") pod \"whisker-779f76fdb-m4z9n\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " pod="calico-system/whisker-779f76fdb-m4z9n" Sep 12 17:36:15.178756 kubelet[3633]: I0912 17:36:15.177833 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqd9\" (UniqueName: \"kubernetes.io/projected/efea65c0-aa46-423b-a81d-6268432c863c-kube-api-access-qgqd9\") pod \"whisker-779f76fdb-m4z9n\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " pod="calico-system/whisker-779f76fdb-m4z9n" Sep 12 17:36:15.179214 kubelet[3633]: I0912 17:36:15.177863 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klh2\" (UniqueName: \"kubernetes.io/projected/be594a54-0fca-4de7-bded-7c1589b44a49-kube-api-access-5klh2\") pod \"calico-apiserver-7948647f84-7bpbq\" (UID: \"be594a54-0fca-4de7-bded-7c1589b44a49\") " pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" Sep 12 17:36:15.179214 kubelet[3633]: I0912 17:36:15.177893 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxzn\" (UniqueName: \"kubernetes.io/projected/2be750b9-c275-4533-bf66-976d561de541-kube-api-access-lfxzn\") pod \"calico-apiserver-7948647f84-ct4ql\" (UID: \"2be750b9-c275-4533-bf66-976d561de541\") " pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" Sep 12 17:36:15.179214 kubelet[3633]: I0912 17:36:15.177916 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5cdb510-d3ed-48d3-9fb7-62a04476b44d-config-volume\") pod \"coredns-7c65d6cfc9-x9v4c\" (UID: \"f5cdb510-d3ed-48d3-9fb7-62a04476b44d\") " pod="kube-system/coredns-7c65d6cfc9-x9v4c" Sep 12 17:36:15.179214 kubelet[3633]: I0912 17:36:15.177940 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75effd7d-c738-4b1f-a43c-f81ac2da3610-tigera-ca-bundle\") pod \"calico-kube-controllers-699d545876-frxkl\" (UID: \"75effd7d-c738-4b1f-a43c-f81ac2da3610\") " pod="calico-system/calico-kube-controllers-699d545876-frxkl" Sep 12 17:36:15.179214 kubelet[3633]: I0912 17:36:15.177964 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc21aef5-fbe0-49ed-bfa6-99bf18c52532-config\") pod \"goldmane-7988f88666-2hmns\" (UID: \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\") " pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.179362 kubelet[3633]: I0912 17:36:15.177986 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc21aef5-fbe0-49ed-bfa6-99bf18c52532-goldmane-key-pair\") pod \"goldmane-7988f88666-2hmns\" (UID: \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\") " pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.179362 kubelet[3633]: I0912 17:36:15.178011 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/efea65c0-aa46-423b-a81d-6268432c863c-whisker-ca-bundle\") pod \"whisker-779f76fdb-m4z9n\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " pod="calico-system/whisker-779f76fdb-m4z9n" Sep 12 17:36:15.220809 containerd[2102]: time="2025-09-12T17:36:15.220485779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8fhr,Uid:4c274700-5d2a-486e-a911-3e7d7162510d,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:15.378112 containerd[2102]: time="2025-09-12T17:36:15.378068858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9v4c,Uid:f5cdb510-d3ed-48d3-9fb7-62a04476b44d,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:15.418710 containerd[2102]: time="2025-09-12T17:36:15.418662834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d545876-frxkl,Uid:75effd7d-c738-4b1f-a43c-f81ac2da3610,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:15.420239 containerd[2102]: time="2025-09-12T17:36:15.420011009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2hmns,Uid:cc21aef5-fbe0-49ed-bfa6-99bf18c52532,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:15.420734 containerd[2102]: time="2025-09-12T17:36:15.420682235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fx5v5,Uid:d68a7ce9-62a6-403c-81fa-26b71803f67f,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:15.421340 containerd[2102]: time="2025-09-12T17:36:15.421308253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-7bpbq,Uid:be594a54-0fca-4de7-bded-7c1589b44a49,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:36:15.429154 containerd[2102]: time="2025-09-12T17:36:15.428511427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779f76fdb-m4z9n,Uid:efea65c0-aa46-423b-a81d-6268432c863c,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:15.433968 containerd[2102]: time="2025-09-12T17:36:15.433922407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-ct4ql,Uid:2be750b9-c275-4533-bf66-976d561de541,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:36:15.549563 containerd[2102]: time="2025-09-12T17:36:15.549110874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:36:15.591224 containerd[2102]: time="2025-09-12T17:36:15.590845034Z" level=error msg="Failed to destroy network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.639285 containerd[2102]: time="2025-09-12T17:36:15.639100584Z" level=error msg="encountered an error cleaning up failed sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.744461 containerd[2102]: time="2025-09-12T17:36:15.743990986Z" level=error msg="Failed to destroy network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:36:15.744617 containerd[2102]: time="2025-09-12T17:36:15.744463980Z" level=error msg="encountered an error cleaning up failed sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.744617 containerd[2102]: time="2025-09-12T17:36:15.744564130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9v4c,Uid:f5cdb510-d3ed-48d3-9fb7-62a04476b44d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.752210 containerd[2102]: time="2025-09-12T17:36:15.752153196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8fhr,Uid:4c274700-5d2a-486e-a911-3e7d7162510d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.757105 kubelet[3633]: E0912 17:36:15.757044 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.757308 kubelet[3633]: E0912 17:36:15.757121 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c8fhr" Sep 12 17:36:15.757308 kubelet[3633]: E0912 17:36:15.757149 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c8fhr" Sep 12 17:36:15.758975 kubelet[3633]: E0912 17:36:15.757334 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c8fhr_calico-system(4c274700-5d2a-486e-a911-3e7d7162510d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c8fhr_calico-system(4c274700-5d2a-486e-a911-3e7d7162510d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:15.758975 kubelet[3633]: E0912 17:36:15.757044 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.758975 kubelet[3633]: E0912 17:36:15.757427 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x9v4c" Sep 12 17:36:15.759537 kubelet[3633]: E0912 17:36:15.757450 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x9v4c" Sep 12 17:36:15.759537 kubelet[3633]: E0912 17:36:15.757486 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x9v4c_kube-system(f5cdb510-d3ed-48d3-9fb7-62a04476b44d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x9v4c_kube-system(f5cdb510-d3ed-48d3-9fb7-62a04476b44d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x9v4c" podUID="f5cdb510-d3ed-48d3-9fb7-62a04476b44d" Sep 12 17:36:15.913830 containerd[2102]: time="2025-09-12T17:36:15.913490888Z" level=error msg="Failed to destroy network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.915386 containerd[2102]: time="2025-09-12T17:36:15.914764559Z" level=error msg="encountered an error cleaning up failed sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.915386 containerd[2102]: time="2025-09-12T17:36:15.914847025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-ct4ql,Uid:2be750b9-c275-4533-bf66-976d561de541,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.917505 kubelet[3633]: E0912 17:36:15.916188 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.917505 kubelet[3633]: E0912 17:36:15.916278 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" Sep 12 17:36:15.917505 kubelet[3633]: E0912 17:36:15.916305 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" Sep 12 17:36:15.919498 kubelet[3633]: E0912 17:36:15.916377 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7948647f84-ct4ql_calico-apiserver(2be750b9-c275-4533-bf66-976d561de541)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7948647f84-ct4ql_calico-apiserver(2be750b9-c275-4533-bf66-976d561de541)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" podUID="2be750b9-c275-4533-bf66-976d561de541" Sep 12 17:36:15.919631 containerd[2102]: time="2025-09-12T17:36:15.917908233Z" level=error msg="Failed to destroy network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.920070 containerd[2102]: time="2025-09-12T17:36:15.919853754Z" level=error msg="Failed to destroy network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.920745 containerd[2102]: time="2025-09-12T17:36:15.920523613Z" level=error msg="encountered an error cleaning up failed sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.920745 containerd[2102]: time="2025-09-12T17:36:15.920586516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2hmns,Uid:cc21aef5-fbe0-49ed-bfa6-99bf18c52532,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.921543 kubelet[3633]: E0912 17:36:15.921325 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.921543 kubelet[3633]: E0912 17:36:15.921387 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.921543 kubelet[3633]: E0912 17:36:15.921414 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2hmns" Sep 12 17:36:15.922386 kubelet[3633]: E0912 17:36:15.922194 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.922386 kubelet[3633]: E0912 17:36:15.922241 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-779f76fdb-m4z9n" Sep 12 17:36:15.922386 kubelet[3633]: E0912 17:36:15.922265 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-779f76fdb-m4z9n" Sep 12 17:36:15.922560 containerd[2102]: time="2025-09-12T17:36:15.921826637Z" level=error msg="encountered an error cleaning up failed sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.922560 containerd[2102]: time="2025-09-12T17:36:15.921903658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779f76fdb-m4z9n,Uid:efea65c0-aa46-423b-a81d-6268432c863c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.922656 kubelet[3633]: E0912 17:36:15.922305 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-779f76fdb-m4z9n_calico-system(efea65c0-aa46-423b-a81d-6268432c863c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-779f76fdb-m4z9n_calico-system(efea65c0-aa46-423b-a81d-6268432c863c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779f76fdb-m4z9n" podUID="efea65c0-aa46-423b-a81d-6268432c863c" Sep 12 17:36:15.923948 kubelet[3633]: E0912 17:36:15.921827 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-2hmns_calico-system(cc21aef5-fbe0-49ed-bfa6-99bf18c52532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-2hmns_calico-system(cc21aef5-fbe0-49ed-bfa6-99bf18c52532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-2hmns" podUID="cc21aef5-fbe0-49ed-bfa6-99bf18c52532" Sep 12 17:36:15.929453 containerd[2102]: time="2025-09-12T17:36:15.929336722Z" level=error msg="Failed to destroy network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.929810 containerd[2102]: time="2025-09-12T17:36:15.929697526Z" level=error msg="encountered an error cleaning up failed sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.930683 containerd[2102]: 
time="2025-09-12T17:36:15.930521455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fx5v5,Uid:d68a7ce9-62a6-403c-81fa-26b71803f67f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.930916 kubelet[3633]: E0912 17:36:15.930849 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.931016 kubelet[3633]: E0912 17:36:15.930925 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fx5v5" Sep 12 17:36:15.931016 kubelet[3633]: E0912 17:36:15.930950 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fx5v5" Sep 12 17:36:15.931234 kubelet[3633]: E0912 17:36:15.931001 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fx5v5_kube-system(d68a7ce9-62a6-403c-81fa-26b71803f67f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fx5v5_kube-system(d68a7ce9-62a6-403c-81fa-26b71803f67f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fx5v5" podUID="d68a7ce9-62a6-403c-81fa-26b71803f67f" Sep 12 17:36:15.938156 containerd[2102]: time="2025-09-12T17:36:15.938104963Z" level=error msg="Failed to destroy network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.939188 containerd[2102]: time="2025-09-12T17:36:15.938934740Z" level=error msg="encountered an error cleaning up failed sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:36:15.939188 containerd[2102]: time="2025-09-12T17:36:15.939121910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d545876-frxkl,Uid:75effd7d-c738-4b1f-a43c-f81ac2da3610,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.940805 kubelet[3633]: E0912 17:36:15.939707 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.940805 kubelet[3633]: E0912 17:36:15.939803 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699d545876-frxkl" Sep 12 17:36:15.940805 kubelet[3633]: E0912 17:36:15.939833 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699d545876-frxkl" Sep 12 17:36:15.941044 kubelet[3633]: E0912 17:36:15.939882 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699d545876-frxkl_calico-system(75effd7d-c738-4b1f-a43c-f81ac2da3610)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699d545876-frxkl_calico-system(75effd7d-c738-4b1f-a43c-f81ac2da3610)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699d545876-frxkl" podUID="75effd7d-c738-4b1f-a43c-f81ac2da3610" Sep 12 17:36:15.964475 containerd[2102]: time="2025-09-12T17:36:15.964075517Z" level=error msg="Failed to destroy network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.965356 containerd[2102]: time="2025-09-12T17:36:15.964947546Z" level=error msg="encountered an error cleaning up failed sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.965741 containerd[2102]: time="2025-09-12T17:36:15.965471887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-7bpbq,Uid:be594a54-0fca-4de7-bded-7c1589b44a49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.966096 kubelet[3633]: E0912 17:36:15.966051 3633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:15.966433 kubelet[3633]: E0912 17:36:15.966235 3633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" Sep 12 17:36:15.966433 kubelet[3633]: E0912 17:36:15.966297 3633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" Sep 12 17:36:15.966433 kubelet[3633]: E0912 17:36:15.966371 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7948647f84-7bpbq_calico-apiserver(be594a54-0fca-4de7-bded-7c1589b44a49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7948647f84-7bpbq_calico-apiserver(be594a54-0fca-4de7-bded-7c1589b44a49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" podUID="be594a54-0fca-4de7-bded-7c1589b44a49" Sep 12 17:36:15.988420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a-shm.mount: Deactivated successfully. Sep 12 17:36:15.998742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85-shm.mount: Deactivated successfully. 
Sep 12 17:36:16.546288 kubelet[3633]: I0912 17:36:16.546255 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:16.550709 kubelet[3633]: I0912 17:36:16.550061 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:16.582501 kubelet[3633]: I0912 17:36:16.582479 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:16.584206 kubelet[3633]: I0912 17:36:16.584163 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:16.588064 kubelet[3633]: I0912 17:36:16.588036 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:16.595496 kubelet[3633]: I0912 17:36:16.594755 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:16.597016 kubelet[3633]: I0912 17:36:16.596990 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:16.598907 kubelet[3633]: I0912 17:36:16.598872 3633 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:16.619543 containerd[2102]: time="2025-09-12T17:36:16.619474435Z" level=info msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" Sep 12 17:36:16.623192 containerd[2102]: time="2025-09-12T17:36:16.620639327Z" level=info msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" Sep 12 17:36:16.623192 containerd[2102]: time="2025-09-12T17:36:16.621733081Z" level=info msg="Ensure that sandbox c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a in task-service has been cleanup successfully" Sep 12 17:36:16.623192 containerd[2102]: time="2025-09-12T17:36:16.621836115Z" level=info msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" Sep 12 17:36:16.623192 containerd[2102]: time="2025-09-12T17:36:16.622072726Z" level=info msg="Ensure that sandbox 33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288 in task-service has been cleanup successfully" Sep 12 17:36:16.624844 containerd[2102]: time="2025-09-12T17:36:16.621745192Z" level=info msg="Ensure that sandbox ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd in task-service has been cleanup successfully" Sep 12 17:36:16.625579 containerd[2102]: time="2025-09-12T17:36:16.624996859Z" level=info msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" Sep 12 17:36:16.625579 containerd[2102]: time="2025-09-12T17:36:16.625040840Z" level=info msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" Sep 12 17:36:16.625579 containerd[2102]: time="2025-09-12T17:36:16.625198126Z" level=info msg="Ensure that sandbox 4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85 in task-service has been cleanup successfully" Sep 12 
17:36:16.625579 containerd[2102]: time="2025-09-12T17:36:16.625401751Z" level=info msg="Ensure that sandbox 7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a in task-service has been cleanup successfully" Sep 12 17:36:16.626908 containerd[2102]: time="2025-09-12T17:36:16.626867891Z" level=info msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" Sep 12 17:36:16.627217 containerd[2102]: time="2025-09-12T17:36:16.627187970Z" level=info msg="Ensure that sandbox 18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45 in task-service has been cleanup successfully" Sep 12 17:36:16.628637 containerd[2102]: time="2025-09-12T17:36:16.628161950Z" level=info msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" Sep 12 17:36:16.628637 containerd[2102]: time="2025-09-12T17:36:16.625006415Z" level=info msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" Sep 12 17:36:16.628637 containerd[2102]: time="2025-09-12T17:36:16.628343735Z" level=info msg="Ensure that sandbox be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e in task-service has been cleanup successfully" Sep 12 17:36:16.628637 containerd[2102]: time="2025-09-12T17:36:16.628372995Z" level=info msg="Ensure that sandbox 59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0 in task-service has been cleanup successfully" Sep 12 17:36:16.815594 containerd[2102]: time="2025-09-12T17:36:16.815464738Z" level=error msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" failed" error="failed to destroy network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.815778 kubelet[3633]: E0912 17:36:16.815701 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:16.833159 containerd[2102]: time="2025-09-12T17:36:16.832565766Z" level=error msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" failed" error="failed to destroy network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.833315 kubelet[3633]: E0912 17:36:16.832845 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:16.840422 kubelet[3633]: E0912 17:36:16.827887 3633 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0"} Sep 12 17:36:16.840422 kubelet[3633]: E0912 17:36:16.832904 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a"} Sep 12 17:36:16.840422 kubelet[3633]: E0912 17:36:16.840338 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d68a7ce9-62a6-403c-81fa-26b71803f67f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.840422 kubelet[3633]: E0912 17:36:16.840367 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c274700-5d2a-486e-a911-3e7d7162510d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.840422 kubelet[3633]: E0912 17:36:16.840376 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d68a7ce9-62a6-403c-81fa-26b71803f67f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fx5v5" podUID="d68a7ce9-62a6-403c-81fa-26b71803f67f" Sep 12 17:36:16.840924 kubelet[3633]: E0912 17:36:16.840395 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c274700-5d2a-486e-a911-3e7d7162510d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c8fhr" podUID="4c274700-5d2a-486e-a911-3e7d7162510d" Sep 12 17:36:16.845790 containerd[2102]: time="2025-09-12T17:36:16.845736274Z" level=error msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" failed" error="failed to destroy network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.846552 kubelet[3633]: E0912 17:36:16.846333 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:16.846552 kubelet[3633]: E0912 17:36:16.846394 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a"} Sep 12 17:36:16.846552 kubelet[3633]: E0912 17:36:16.846439 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.846552 kubelet[3633]: E0912 17:36:16.846471 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc21aef5-fbe0-49ed-bfa6-99bf18c52532\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-2hmns" podUID="cc21aef5-fbe0-49ed-bfa6-99bf18c52532" Sep 12 17:36:16.869145 containerd[2102]: time="2025-09-12T17:36:16.869073854Z" level=error msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" failed" error="failed to destroy network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.869741 kubelet[3633]: E0912 17:36:16.869664 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:16.869846 kubelet[3633]: E0912 17:36:16.869768 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288"} Sep 12 17:36:16.869846 kubelet[3633]: E0912 17:36:16.869813 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5cdb510-d3ed-48d3-9fb7-62a04476b44d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.869991 kubelet[3633]: E0912 
17:36:16.869842 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5cdb510-d3ed-48d3-9fb7-62a04476b44d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x9v4c" podUID="f5cdb510-d3ed-48d3-9fb7-62a04476b44d" Sep 12 17:36:16.872907 containerd[2102]: time="2025-09-12T17:36:16.872852628Z" level=error msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" failed" error="failed to destroy network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.874109 kubelet[3633]: E0912 17:36:16.873921 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:16.874109 kubelet[3633]: E0912 17:36:16.873983 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85"} Sep 12 17:36:16.874109 kubelet[3633]: E0912 17:36:16.874027 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be594a54-0fca-4de7-bded-7c1589b44a49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.874109 kubelet[3633]: E0912 17:36:16.874068 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be594a54-0fca-4de7-bded-7c1589b44a49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" podUID="be594a54-0fca-4de7-bded-7c1589b44a49" Sep 12 17:36:16.877608 containerd[2102]: time="2025-09-12T17:36:16.877560267Z" level=error msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" failed" error="failed to destroy network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:36:16.877915 kubelet[3633]: E0912 17:36:16.877815 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:16.877915 kubelet[3633]: E0912 17:36:16.877879 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e"} Sep 12 17:36:16.878068 kubelet[3633]: E0912 17:36:16.877925 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efea65c0-aa46-423b-a81d-6268432c863c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.878068 kubelet[3633]: E0912 17:36:16.877955 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efea65c0-aa46-423b-a81d-6268432c863c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779f76fdb-m4z9n" podUID="efea65c0-aa46-423b-a81d-6268432c863c" Sep 12 17:36:16.882365 containerd[2102]: time="2025-09-12T17:36:16.881850450Z" level=error msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" failed" error="failed to destroy network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.882573 kubelet[3633]: E0912 17:36:16.882185 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:16.882573 kubelet[3633]: E0912 17:36:16.882241 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45"} Sep 12 17:36:16.882573 kubelet[3633]: E0912 17:36:16.882285 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75effd7d-c738-4b1f-a43c-f81ac2da3610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.882573 kubelet[3633]: E0912 17:36:16.882326 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75effd7d-c738-4b1f-a43c-f81ac2da3610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699d545876-frxkl" podUID="75effd7d-c738-4b1f-a43c-f81ac2da3610" Sep 12 17:36:16.884421 containerd[2102]: time="2025-09-12T17:36:16.884352784Z" level=error msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" failed" error="failed to destroy network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:16.884742 kubelet[3633]: E0912 17:36:16.884663 3633 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:16.884844 kubelet[3633]: E0912 17:36:16.884762 3633 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd"} Sep 12 17:36:16.884844 kubelet[3633]: E0912 17:36:16.884809 3633 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2be750b9-c275-4533-bf66-976d561de541\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:16.884980 kubelet[3633]: E0912 17:36:16.884878 3633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2be750b9-c275-4533-bf66-976d561de541\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" podUID="2be750b9-c275-4533-bf66-976d561de541" Sep 12 17:36:21.504809 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:21.508823 systemd-journald[1571]: Under memory pressure, flushing caches. 
Sep 12 17:36:21.504902 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:23.554111 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:23.553016 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:23.553153 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:23.749444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117765707.mount: Deactivated successfully. Sep 12 17:36:23.834702 containerd[2102]: time="2025-09-12T17:36:23.826061034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.836501 containerd[2102]: time="2025-09-12T17:36:23.836445401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:36:23.837030 containerd[2102]: time="2025-09-12T17:36:23.836903421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.287745544s" Sep 12 17:36:23.837030 containerd[2102]: time="2025-09-12T17:36:23.836940122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:36:23.879452 containerd[2102]: time="2025-09-12T17:36:23.877864696Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.879452 containerd[2102]: time="2025-09-12T17:36:23.878649702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:23.932491 containerd[2102]: time="2025-09-12T17:36:23.932363014Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:36:23.974397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197936699.mount: Deactivated successfully. Sep 12 17:36:23.995429 containerd[2102]: time="2025-09-12T17:36:23.995365233Z" level=info msg="CreateContainer within sandbox \"563301c736719f130ec7a72ca724ae6872a6ecf5afdabb730d2724dfc6610e00\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05bd44a016d54c77e0170d00d23323f3dbf9466568847faff92f2d7e62c1044c\"" Sep 12 17:36:23.998779 containerd[2102]: time="2025-09-12T17:36:23.997653929Z" level=info msg="StartContainer for \"05bd44a016d54c77e0170d00d23323f3dbf9466568847faff92f2d7e62c1044c\"" Sep 12 17:36:24.131676 containerd[2102]: time="2025-09-12T17:36:24.131244096Z" level=info msg="StartContainer for \"05bd44a016d54c77e0170d00d23323f3dbf9466568847faff92f2d7e62c1044c\" returns successfully" Sep 12 17:36:24.284092 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:36:24.285635 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:36:24.597065 containerd[2102]: time="2025-09-12T17:36:24.595169097Z" level=info msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" Sep 12 17:36:24.723741 kubelet[3633]: I0912 17:36:24.697768 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g9nsx" podStartSLOduration=1.971335678 podStartE2EDuration="20.675955288s" podCreationTimestamp="2025-09-12 17:36:04 +0000 UTC" firstStartedPulling="2025-09-12 17:36:05.173672971 +0000 UTC m=+24.179792610" lastFinishedPulling="2025-09-12 17:36:23.878292603 +0000 UTC m=+42.884412220" observedRunningTime="2025-09-12 17:36:24.672348334 +0000 UTC m=+43.678467976" watchObservedRunningTime="2025-09-12 17:36:24.675955288 +0000 UTC m=+43.682074923" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.783 [INFO][4785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.784 [INFO][4785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" iface="eth0" netns="/var/run/netns/cni-7243ada2-d6d5-5d6a-f120-be689aab80a2" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.785 [INFO][4785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" iface="eth0" netns="/var/run/netns/cni-7243ada2-d6d5-5d6a-f120-be689aab80a2" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.786 [INFO][4785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" iface="eth0" netns="/var/run/netns/cni-7243ada2-d6d5-5d6a-f120-be689aab80a2" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.786 [INFO][4785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:24.786 [INFO][4785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.104 [INFO][4803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.111 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.111 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.129 [WARNING][4803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.129 [INFO][4803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.131 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:25.137745 containerd[2102]: 2025-09-12 17:36:25.133 [INFO][4785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:25.142215 containerd[2102]: time="2025-09-12T17:36:25.138482563Z" level=info msg="TearDown network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" successfully" Sep 12 17:36:25.142215 containerd[2102]: time="2025-09-12T17:36:25.138521178Z" level=info msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" returns successfully" Sep 12 17:36:25.142565 systemd[1]: run-netns-cni\x2d7243ada2\x2dd6d5\x2d5d6a\x2df120\x2dbe689aab80a2.mount: Deactivated successfully. Sep 12 17:36:25.271791 kubelet[3633]: I0912 17:36:25.271745 3633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efea65c0-aa46-423b-a81d-6268432c863c-whisker-backend-key-pair\") pod \"efea65c0-aa46-423b-a81d-6268432c863c\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " Sep 12 17:36:25.276488 kubelet[3633]: I0912 17:36:25.275861 3633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efea65c0-aa46-423b-a81d-6268432c863c-whisker-ca-bundle\") pod \"efea65c0-aa46-423b-a81d-6268432c863c\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " Sep 12 17:36:25.276488 kubelet[3633]: I0912 17:36:25.275937 3633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgqd9\" (UniqueName: \"kubernetes.io/projected/efea65c0-aa46-423b-a81d-6268432c863c-kube-api-access-qgqd9\") pod \"efea65c0-aa46-423b-a81d-6268432c863c\" (UID: \"efea65c0-aa46-423b-a81d-6268432c863c\") " Sep 12 17:36:25.305750 kubelet[3633]: I0912 17:36:25.303557 3633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efea65c0-aa46-423b-a81d-6268432c863c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "efea65c0-aa46-423b-a81d-6268432c863c" (UID: "efea65c0-aa46-423b-a81d-6268432c863c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:36:25.307850 kubelet[3633]: I0912 17:36:25.307012 3633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efea65c0-aa46-423b-a81d-6268432c863c-kube-api-access-qgqd9" (OuterVolumeSpecName: "kube-api-access-qgqd9") pod "efea65c0-aa46-423b-a81d-6268432c863c" (UID: "efea65c0-aa46-423b-a81d-6268432c863c"). InnerVolumeSpecName "kube-api-access-qgqd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:36:25.311090 systemd[1]: var-lib-kubelet-pods-efea65c0\x2daa46\x2d423b\x2da81d\x2d6268432c863c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqgqd9.mount: Deactivated successfully. Sep 12 17:36:25.313625 kubelet[3633]: I0912 17:36:25.313097 3633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efea65c0-aa46-423b-a81d-6268432c863c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "efea65c0-aa46-423b-a81d-6268432c863c" (UID: "efea65c0-aa46-423b-a81d-6268432c863c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:36:25.317076 systemd[1]: var-lib-kubelet-pods-efea65c0\x2daa46\x2d423b\x2da81d\x2d6268432c863c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:36:25.377344 kubelet[3633]: I0912 17:36:25.377281 3633 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efea65c0-aa46-423b-a81d-6268432c863c-whisker-backend-key-pair\") on node \"ip-172-31-16-204\" DevicePath \"\"" Sep 12 17:36:25.377344 kubelet[3633]: I0912 17:36:25.377340 3633 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efea65c0-aa46-423b-a81d-6268432c863c-whisker-ca-bundle\") on node \"ip-172-31-16-204\" DevicePath \"\"" Sep 12 17:36:25.377344 kubelet[3633]: I0912 17:36:25.377355 3633 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgqd9\" (UniqueName: \"kubernetes.io/projected/efea65c0-aa46-423b-a81d-6268432c863c-kube-api-access-qgqd9\") on node \"ip-172-31-16-204\" DevicePath \"\"" Sep 12 17:36:25.891090 kubelet[3633]: I0912 17:36:25.890997 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad3eed3a-6174-4365-9576-b46bcdb2bfc0-whisker-backend-key-pair\") pod \"whisker-6cc6686689-66mtx\" (UID: \"ad3eed3a-6174-4365-9576-b46bcdb2bfc0\") " pod="calico-system/whisker-6cc6686689-66mtx" Sep 12 17:36:25.891643 kubelet[3633]: I0912 17:36:25.891130 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad3eed3a-6174-4365-9576-b46bcdb2bfc0-whisker-ca-bundle\") pod \"whisker-6cc6686689-66mtx\" (UID: \"ad3eed3a-6174-4365-9576-b46bcdb2bfc0\") " pod="calico-system/whisker-6cc6686689-66mtx" Sep 12 17:36:25.891643 kubelet[3633]: I0912 17:36:25.891218 3633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrjp\" (UniqueName: \"kubernetes.io/projected/ad3eed3a-6174-4365-9576-b46bcdb2bfc0-kube-api-access-vgrjp\") pod \"whisker-6cc6686689-66mtx\" (UID: \"ad3eed3a-6174-4365-9576-b46bcdb2bfc0\") " pod="calico-system/whisker-6cc6686689-66mtx" Sep 12 17:36:26.104112 containerd[2102]: time="2025-09-12T17:36:26.102497182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc6686689-66mtx,Uid:ad3eed3a-6174-4365-9576-b46bcdb2bfc0,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:26.513695 (udev-worker)[4762]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:36:26.529660 systemd-networkd[1650]: calica7affe24fc: Link UP Sep 12 17:36:26.530500 systemd-networkd[1650]: calica7affe24fc: Gained carrier Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.221 [INFO][4922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.246 [INFO][4922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0 whisker-6cc6686689- calico-system ad3eed3a-6174-4365-9576-b46bcdb2bfc0 889 0 2025-09-12 17:36:25 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cc6686689 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-204 whisker-6cc6686689-66mtx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica7affe24fc [] [] }} ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.246 [INFO][4922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.387 [INFO][4932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" HandleID="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Workload="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.388 [INFO][4932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" HandleID="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Workload="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-204", "pod":"whisker-6cc6686689-66mtx", "timestamp":"2025-09-12 17:36:26.387350684 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.388 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.388 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.389 [INFO][4932] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.416 [INFO][4932] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.427 [INFO][4932] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.434 [INFO][4932] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.436 [INFO][4932] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.439 [INFO][4932] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.439 [INFO][4932] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.441 [INFO][4932] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2 Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.446 [INFO][4932] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.462 [INFO][4932] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.1/26] block=192.168.48.0/26 handle="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.462 [INFO][4932] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.1/26] handle="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" host="ip-172-31-16-204" Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.462 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:26.574705 containerd[2102]: 2025-09-12 17:36:26.462 [INFO][4932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.1/26] IPv6=[] ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" HandleID="k8s-pod-network.34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Workload="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.469 [INFO][4922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0", GenerateName:"whisker-6cc6686689-", Namespace:"calico-system", SelfLink:"", UID:"ad3eed3a-6174-4365-9576-b46bcdb2bfc0", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc6686689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"whisker-6cc6686689-66mtx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica7affe24fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.470 [INFO][4922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.1/32] ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.471 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica7affe24fc ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.528 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.533 [INFO][4922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" 
WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0", GenerateName:"whisker-6cc6686689-", Namespace:"calico-system", SelfLink:"", UID:"ad3eed3a-6174-4365-9576-b46bcdb2bfc0", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc6686689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2", Pod:"whisker-6cc6686689-66mtx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica7affe24fc", MAC:"96:58:2d:27:ab:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:26.581030 containerd[2102]: 2025-09-12 17:36:26.553 [INFO][4922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2" Namespace="calico-system" Pod="whisker-6cc6686689-66mtx" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--6cc6686689--66mtx-eth0" Sep 12 17:36:26.641298 containerd[2102]: time="2025-09-12T17:36:26.640098220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:26.641298 containerd[2102]: time="2025-09-12T17:36:26.640252157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:26.641298 containerd[2102]: time="2025-09-12T17:36:26.640278244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:26.684075 containerd[2102]: time="2025-09-12T17:36:26.652443643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:26.884290 containerd[2102]: time="2025-09-12T17:36:26.884225458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc6686689-66mtx,Uid:ad3eed3a-6174-4365-9576-b46bcdb2bfc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2\"" Sep 12 17:36:26.895189 containerd[2102]: time="2025-09-12T17:36:26.895142622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:36:26.921807 kernel: bpftool[5042]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:36:27.211137 systemd-networkd[1650]: vxlan.calico: Link UP Sep 12 17:36:27.212189 systemd-networkd[1650]: vxlan.calico: Gained carrier Sep 12 17:36:27.224496 kubelet[3633]: I0912 17:36:27.224441 3633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efea65c0-aa46-423b-a81d-6268432c863c" path="/var/lib/kubelet/pods/efea65c0-aa46-423b-a81d-6268432c863c/volumes" Sep 12 17:36:27.251794 (udev-worker)[4761]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:36:27.459257 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:27.456921 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:27.456947 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:28.204651 containerd[2102]: time="2025-09-12T17:36:28.204585237Z" level=info msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" Sep 12 17:36:28.214154 containerd[2102]: time="2025-09-12T17:36:28.214109090Z" level=info msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" Sep 12 17:36:28.226476 systemd-networkd[1650]: calica7affe24fc: Gained IPv6LL Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.330 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.330 [INFO][5131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" iface="eth0" netns="/var/run/netns/cni-4575ae31-4a03-7a10-d9c3-e4d437d24189" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.331 [INFO][5131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" iface="eth0" netns="/var/run/netns/cni-4575ae31-4a03-7a10-d9c3-e4d437d24189" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.336 [INFO][5131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" iface="eth0" netns="/var/run/netns/cni-4575ae31-4a03-7a10-d9c3-e4d437d24189" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.336 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.336 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.500 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.507 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.507 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.545 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.545 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.550 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:28.573113 containerd[2102]: 2025-09-12 17:36:28.567 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:28.599673 containerd[2102]: time="2025-09-12T17:36:28.599628868Z" level=info msg="TearDown network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" successfully" Sep 12 17:36:28.599903 containerd[2102]: time="2025-09-12T17:36:28.599883605Z" level=info msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" returns successfully" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.360 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.361 [INFO][5141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" iface="eth0" netns="/var/run/netns/cni-04cbf3ad-9fc7-171e-d917-ebec576d4f45" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.361 [INFO][5141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" iface="eth0" netns="/var/run/netns/cni-04cbf3ad-9fc7-171e-d917-ebec576d4f45" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.362 [INFO][5141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" iface="eth0" netns="/var/run/netns/cni-04cbf3ad-9fc7-171e-d917-ebec576d4f45" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.363 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.363 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.546 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.546 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.552 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.580 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.580 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.587 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:28.607268 containerd[2102]: 2025-09-12 17:36:28.594 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:28.613825 containerd[2102]: time="2025-09-12T17:36:28.610439630Z" level=info msg="TearDown network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" successfully" Sep 12 17:36:28.613825 containerd[2102]: time="2025-09-12T17:36:28.612610778Z" level=info msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" returns successfully" Sep 12 17:36:28.613835 systemd[1]: run-netns-cni\x2d4575ae31\x2d4a03\x2d7a10\x2dd9c3\x2de4d437d24189.mount: Deactivated successfully. 
Sep 12 17:36:28.621739 containerd[2102]: time="2025-09-12T17:36:28.619973754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d545876-frxkl,Uid:75effd7d-c738-4b1f-a43c-f81ac2da3610,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:28.623870 containerd[2102]: time="2025-09-12T17:36:28.623635271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9v4c,Uid:f5cdb510-d3ed-48d3-9fb7-62a04476b44d,Namespace:kube-system,Attempt:1,}" Sep 12 17:36:28.627412 systemd[1]: run-netns-cni\x2d04cbf3ad\x2d9fc7\x2d171e\x2dd917\x2debec576d4f45.mount: Deactivated successfully. Sep 12 17:36:28.778143 containerd[2102]: time="2025-09-12T17:36:28.778097022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:28.780562 containerd[2102]: time="2025-09-12T17:36:28.780309056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:36:28.782658 containerd[2102]: time="2025-09-12T17:36:28.782615395Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:28.792420 containerd[2102]: time="2025-09-12T17:36:28.792346812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:28.795615 containerd[2102]: time="2025-09-12T17:36:28.795455922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.900015431s" Sep 12 17:36:28.795920 containerd[2102]: time="2025-09-12T17:36:28.795504722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:36:28.803185 containerd[2102]: time="2025-09-12T17:36:28.803039301Z" level=info msg="CreateContainer within sandbox \"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:36:28.843177 containerd[2102]: time="2025-09-12T17:36:28.841737773Z" level=info msg="CreateContainer within sandbox \"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3e05c7ca43be7908eb95bea97ec44160a062f6b2cc4fe631befd3b19fbc64281\"" Sep 12 17:36:28.846238 containerd[2102]: time="2025-09-12T17:36:28.846075741Z" level=info msg="StartContainer for \"3e05c7ca43be7908eb95bea97ec44160a062f6b2cc4fe631befd3b19fbc64281\"" Sep 12 17:36:28.980099 systemd-networkd[1650]: cali21f5537f251: Link UP Sep 12 17:36:28.981783 systemd-networkd[1650]: cali21f5537f251: Gained carrier Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.789 [INFO][5167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0 calico-kube-controllers-699d545876- calico-system 75effd7d-c738-4b1f-a43c-f81ac2da3610 903 0 
2025-09-12 17:36:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699d545876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-204 calico-kube-controllers-699d545876-frxkl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21f5537f251 [] [] }} ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.790 [INFO][5167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.860 [INFO][5188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" HandleID="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.861 [INFO][5188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" HandleID="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-204", "pod":"calico-kube-controllers-699d545876-frxkl", "timestamp":"2025-09-12 17:36:28.860559592 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.861 [INFO][5188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.861 [INFO][5188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.861 [INFO][5188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.874 [INFO][5188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.891 [INFO][5188] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.904 [INFO][5188] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.910 [INFO][5188] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.916 [INFO][5188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.916 [INFO][5188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.921 [INFO][5188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9 Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.936 [INFO][5188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.949 [INFO][5188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.2/26] block=192.168.48.0/26 handle="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.949 [INFO][5188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.2/26] handle="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" host="ip-172-31-16-204" Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.949 [INFO][5188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:29.032074 containerd[2102]: 2025-09-12 17:36:28.950 [INFO][5188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.2/26] IPv6=[] ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" HandleID="k8s-pod-network.68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.037653 containerd[2102]: 2025-09-12 17:36:28.966 [INFO][5167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0", GenerateName:"calico-kube-controllers-699d545876-", Namespace:"calico-system", SelfLink:"", UID:"75effd7d-c738-4b1f-a43c-f81ac2da3610", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d545876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"calico-kube-controllers-699d545876-frxkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21f5537f251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:29.037653 containerd[2102]: 2025-09-12 17:36:28.967 [INFO][5167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.2/32] ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.037653 containerd[2102]: 2025-09-12 17:36:28.967 [INFO][5167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21f5537f251 ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.037653 containerd[2102]: 2025-09-12 17:36:28.976 [INFO][5167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.037653 containerd[2102]: 
2025-09-12 17:36:28.978 [INFO][5167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0", GenerateName:"calico-kube-controllers-699d545876-", Namespace:"calico-system", SelfLink:"", UID:"75effd7d-c738-4b1f-a43c-f81ac2da3610", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d545876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9", Pod:"calico-kube-controllers-699d545876-frxkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21f5537f251", MAC:"ee:34:41:6a:10:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:29.037653 containerd[2102]: 2025-09-12 17:36:29.005 [INFO][5167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9" Namespace="calico-system" Pod="calico-kube-controllers-699d545876-frxkl" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:29.113320 containerd[2102]: time="2025-09-12T17:36:29.112584106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:29.113320 containerd[2102]: time="2025-09-12T17:36:29.112671515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:29.113320 containerd[2102]: time="2025-09-12T17:36:29.112693245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:29.114466 containerd[2102]: time="2025-09-12T17:36:29.112884340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:29.130527 systemd-networkd[1650]: calied58e862fe7: Link UP Sep 12 17:36:29.134094 systemd-networkd[1650]: calied58e862fe7: Gained carrier Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.840 [INFO][5176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0 coredns-7c65d6cfc9- kube-system f5cdb510-d3ed-48d3-9fb7-62a04476b44d 904 0 2025-09-12 17:35:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-204 coredns-7c65d6cfc9-x9v4c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied58e862fe7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.840 [INFO][5176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.986 [INFO][5198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" HandleID="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.986 [INFO][5198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" HandleID="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cd50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-204", "pod":"coredns-7c65d6cfc9-x9v4c", "timestamp":"2025-09-12 17:36:28.986510335 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.987 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.987 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:28.987 [INFO][5198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.014 [INFO][5198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.036 [INFO][5198] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.047 [INFO][5198] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.049 [INFO][5198] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.059 [INFO][5198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.060 [INFO][5198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.069 [INFO][5198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83 Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.098 [INFO][5198] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.120 [INFO][5198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.3/26] block=192.168.48.0/26 handle="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.120 [INFO][5198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.3/26] handle="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" host="ip-172-31-16-204" Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.120 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:29.179709 containerd[2102]: 2025-09-12 17:36:29.120 [INFO][5198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.3/26] IPv6=[] ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" HandleID="k8s-pod-network.16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.125 [INFO][5176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f5cdb510-d3ed-48d3-9fb7-62a04476b44d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"coredns-7c65d6cfc9-x9v4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied58e862fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.125 [INFO][5176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.3/32] ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.125 [INFO][5176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied58e862fe7 ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.135 [INFO][5176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" 
WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.143 [INFO][5176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f5cdb510-d3ed-48d3-9fb7-62a04476b44d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83", Pod:"coredns-7c65d6cfc9-x9v4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied58e862fe7", MAC:"0a:80:f1:ed:01:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:29.180871 containerd[2102]: 2025-09-12 17:36:29.168 [INFO][5176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9v4c" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:29.214771 containerd[2102]: time="2025-09-12T17:36:29.214690310Z" level=info msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" Sep 12 17:36:29.242337 containerd[2102]: time="2025-09-12T17:36:29.242145772Z" level=info msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" Sep 12 17:36:29.251852 systemd-networkd[1650]: vxlan.calico: Gained IPv6LL Sep 12 17:36:29.280200 containerd[2102]: time="2025-09-12T17:36:29.279256383Z" level=info msg="StartContainer for \"3e05c7ca43be7908eb95bea97ec44160a062f6b2cc4fe631befd3b19fbc64281\" returns successfully" Sep 12 17:36:29.291774 containerd[2102]: time="2025-09-12T17:36:29.290779950Z" level=info msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" Sep 12 17:36:29.301133 containerd[2102]: 
time="2025-09-12T17:36:29.275414172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:29.301133 containerd[2102]: time="2025-09-12T17:36:29.275489871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:29.301133 containerd[2102]: time="2025-09-12T17:36:29.275513020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:29.301133 containerd[2102]: time="2025-09-12T17:36:29.275640379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:29.313641 containerd[2102]: time="2025-09-12T17:36:29.309150819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:36:29.499216 containerd[2102]: time="2025-09-12T17:36:29.498469651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d545876-frxkl,Uid:75effd7d-c738-4b1f-a43c-f81ac2da3610,Namespace:calico-system,Attempt:1,} returns sandbox id \"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9\"" Sep 12 17:36:29.509335 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:29.505775 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:29.505813 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:29.553546 containerd[2102]: time="2025-09-12T17:36:29.553500911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9v4c,Uid:f5cdb510-d3ed-48d3-9fb7-62a04476b44d,Namespace:kube-system,Attempt:1,} returns sandbox id \"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83\"" Sep 12 17:36:29.561933 containerd[2102]: time="2025-09-12T17:36:29.561875809Z" level=info msg="CreateContainer within sandbox \"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.419 [INFO][5332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.420 [INFO][5332] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" iface="eth0" netns="/var/run/netns/cni-cfe760b7-04f6-c0c6-2047-5738cbf55627" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.423 [INFO][5332] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" iface="eth0" netns="/var/run/netns/cni-cfe760b7-04f6-c0c6-2047-5738cbf55627" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.437 [INFO][5332] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" iface="eth0" netns="/var/run/netns/cni-cfe760b7-04f6-c0c6-2047-5738cbf55627" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.437 [INFO][5332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.437 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.535 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.535 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.535 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.546 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.547 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.550 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:29.571622 containerd[2102]: 2025-09-12 17:36:29.559 [INFO][5332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:29.573051 containerd[2102]: time="2025-09-12T17:36:29.572224187Z" level=info msg="TearDown network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" successfully" Sep 12 17:36:29.573051 containerd[2102]: time="2025-09-12T17:36:29.572371752Z" level=info msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" returns successfully" Sep 12 17:36:29.575998 containerd[2102]: time="2025-09-12T17:36:29.574876162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-ct4ql,Uid:2be750b9-c275-4533-bf66-976d561de541,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:36:29.621651 systemd[1]: run-netns-cni\x2dcfe760b7\x2d04f6\x2dc0c6\x2d2047\x2d5738cbf55627.mount: Deactivated successfully. Sep 12 17:36:29.662900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772593465.mount: Deactivated successfully. 
Sep 12 17:36:29.688479 containerd[2102]: time="2025-09-12T17:36:29.688436564Z" level=info msg="CreateContainer within sandbox \"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb6610f834cda7b87b128211d901d97fc1397dfaa7ec7c03aac9f1d208dec67d\"" Sep 12 17:36:29.691291 containerd[2102]: time="2025-09-12T17:36:29.691037355Z" level=info msg="StartContainer for \"bb6610f834cda7b87b128211d901d97fc1397dfaa7ec7c03aac9f1d208dec67d\"" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.606 [INFO][5351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.615 [INFO][5351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" iface="eth0" netns="/var/run/netns/cni-cf15eb2e-7971-1b1f-396e-0a01fd9846c9" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.618 [INFO][5351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" iface="eth0" netns="/var/run/netns/cni-cf15eb2e-7971-1b1f-396e-0a01fd9846c9" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.619 [INFO][5351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" iface="eth0" netns="/var/run/netns/cni-cf15eb2e-7971-1b1f-396e-0a01fd9846c9" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.619 [INFO][5351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.619 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.740 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.742 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.743 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.753 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.755 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.757 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:29.768225 containerd[2102]: 2025-09-12 17:36:29.765 [INFO][5351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:29.768225 containerd[2102]: time="2025-09-12T17:36:29.767858900Z" level=info msg="TearDown network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" successfully" Sep 12 17:36:29.768225 containerd[2102]: time="2025-09-12T17:36:29.767924269Z" level=info msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" returns successfully" Sep 12 17:36:29.773466 containerd[2102]: time="2025-09-12T17:36:29.773393413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8fhr,Uid:4c274700-5d2a-486e-a911-3e7d7162510d,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.621 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.621 [INFO][5355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" iface="eth0" netns="/var/run/netns/cni-521537d3-a0d9-52f3-c281-e46907fd26bb" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.626 [INFO][5355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" iface="eth0" netns="/var/run/netns/cni-521537d3-a0d9-52f3-c281-e46907fd26bb" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.632 [INFO][5355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" iface="eth0" netns="/var/run/netns/cni-521537d3-a0d9-52f3-c281-e46907fd26bb" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.633 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.633 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.759 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.759 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.759 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.776 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.776 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.781 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:29.795852 containerd[2102]: 2025-09-12 17:36:29.788 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:29.799563 containerd[2102]: time="2025-09-12T17:36:29.799171585Z" level=info msg="TearDown network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" successfully" Sep 12 17:36:29.799563 containerd[2102]: time="2025-09-12T17:36:29.799214675Z" level=info msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" returns successfully" Sep 12 17:36:29.823589 containerd[2102]: time="2025-09-12T17:36:29.822630827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-7bpbq,Uid:be594a54-0fca-4de7-bded-7c1589b44a49,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:36:29.868203 containerd[2102]: time="2025-09-12T17:36:29.868155868Z" level=info msg="StartContainer for \"bb6610f834cda7b87b128211d901d97fc1397dfaa7ec7c03aac9f1d208dec67d\" returns successfully" Sep 12 17:36:30.090566 systemd-networkd[1650]: cali263a3d7d0bb: Link UP Sep 12 17:36:30.092953 systemd-networkd[1650]: cali263a3d7d0bb: Gained carrier Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.820 [INFO][5404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0 calico-apiserver-7948647f84- calico-apiserver 2be750b9-c275-4533-bf66-976d561de541 921 0 2025-09-12 17:35:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7948647f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-204 calico-apiserver-7948647f84-ct4ql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali263a3d7d0bb [] [] }} ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.821 [INFO][5404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.943 [INFO][5451] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" HandleID="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.962 [INFO][5451] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" HandleID="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-204", "pod":"calico-apiserver-7948647f84-ct4ql", "timestamp":"2025-09-12 17:36:29.942772191 +0000 UTC"}, 
Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.964 [INFO][5451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.965 [INFO][5451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.966 [INFO][5451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:29.985 [INFO][5451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.015 [INFO][5451] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.034 [INFO][5451] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.040 [INFO][5451] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.049 [INFO][5451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.049 [INFO][5451] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.052 [INFO][5451] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.060 [INFO][5451] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.074 [INFO][5451] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.4/26] block=192.168.48.0/26 handle="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.074 [INFO][5451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.4/26] handle="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" host="ip-172-31-16-204" Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.075 [INFO][5451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:30.132804 containerd[2102]: 2025-09-12 17:36:30.075 [INFO][5451] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.4/26] IPv6=[] ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" HandleID="k8s-pod-network.f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.083 [INFO][5404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be750b9-c275-4533-bf66-976d561de541", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"calico-apiserver-7948647f84-ct4ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali263a3d7d0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.083 [INFO][5404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.4/32] ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.084 [INFO][5404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali263a3d7d0bb ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.095 [INFO][5404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.096 [INFO][5404] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be750b9-c275-4533-bf66-976d561de541", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db", Pod:"calico-apiserver-7948647f84-ct4ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali263a3d7d0bb", MAC:"62:c5:87:22:13:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.133739 containerd[2102]: 2025-09-12 17:36:30.126 [INFO][5404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-ct4ql" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:30.204471 containerd[2102]: time="2025-09-12T17:36:30.204429123Z" level=info msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" Sep 12 17:36:30.208029 containerd[2102]: time="2025-09-12T17:36:30.207703370Z" level=info msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" Sep 12 17:36:30.209380 systemd-networkd[1650]: calied58e862fe7: Gained IPv6LL Sep 12 17:36:30.228902 systemd-networkd[1650]: calib993a07d602: Link UP Sep 12 17:36:30.232778 systemd-networkd[1650]: calib993a07d602: Gained carrier Sep 12 17:36:30.274938 containerd[2102]: time="2025-09-12T17:36:30.274661422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:30.280548 containerd[2102]: time="2025-09-12T17:36:30.277805564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:30.280548 containerd[2102]: time="2025-09-12T17:36:30.277840537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.280548 containerd[2102]: time="2025-09-12T17:36:30.277985220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:29.934 [INFO][5469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0 calico-apiserver-7948647f84- calico-apiserver be594a54-0fca-4de7-bded-7c1589b44a49 926 0 2025-09-12 17:35:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7948647f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-204 calico-apiserver-7948647f84-7bpbq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib993a07d602 [] [] }} ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:29.934 [INFO][5469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.047 [INFO][5488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" HandleID="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.048 [INFO][5488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" HandleID="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-204", "pod":"calico-apiserver-7948647f84-7bpbq", "timestamp":"2025-09-12 17:36:30.047747043 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.048 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.075 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
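Between the two IPAM transcripts, the [5404] entries record the rest of the CNI add for ct4ql: populate a v3.WorkloadEndpoint with the claimed /32, set the host-side veth name (cali263a3d7d0bb), disable IPv4 forwarding on it, then patch the MAC and active container ID into the endpoint and write it to the datastore. The sketch below compresses that ordering; Endpoint and the store callback are simplified stand-ins for Calico's v3 types and its datastore client, not the library's API.

    package cnisketch

    // Endpoint is a pared-down stand-in for Calico's v3.WorkloadEndpoint;
    // the field names follow the structs dumped in the log above.
    type Endpoint struct {
        Node, Pod, Namespace string
        IPNetworks           []string
        InterfaceName        string
        MAC                  string
        ContainerID          string
    }

    // cmdAdd mirrors the logged order of operations for one pod. Note that
    // the endpoint is first populated with an empty MAC and ContainerID
    // (k8s.go 418) and only completed just before the datastore write
    // (k8s.go 446, 532).
    func cmdAdd(ep *Endpoint, assignedIP, hostVeth, mac, containerID string,
        store func(*Endpoint) error) error {
        ep.IPNetworks = []string{assignedIP} // "Calico CNI using IPs: [...]"
        ep.InterfaceName = hostVeth          // "Setting the host side veth name"
        // "Disabling IPv4 forwarding" is a per-interface sysctl, elided here.
        ep.MAC = mac
        ep.ContainerID = containerID // "Added Mac, interface name, and active container ID"
        return store(ep)             // "Wrote updated endpoint to datastore"
    }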
Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.075 [INFO][5488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.097 [INFO][5488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.111 [INFO][5488] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.132 [INFO][5488] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.136 [INFO][5488] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.140 [INFO][5488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.140 [INFO][5488] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.144 [INFO][5488] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2 Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.158 [INFO][5488] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.174 [INFO][5488] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.5/26] block=192.168.48.0/26 handle="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.175 [INFO][5488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.5/26] handle="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" host="ip-172-31-16-204" Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.175 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:30.296515 containerd[2102]: 2025-09-12 17:36:30.176 [INFO][5488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.5/26] IPv6=[] ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" HandleID="k8s-pod-network.15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.191 [INFO][5469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"be594a54-0fca-4de7-bded-7c1589b44a49", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"calico-apiserver-7948647f84-7bpbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib993a07d602", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.191 [INFO][5469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.5/32] ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.192 [INFO][5469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib993a07d602 ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.232 [INFO][5469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.242 [INFO][5469] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"be594a54-0fca-4de7-bded-7c1589b44a49", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2", Pod:"calico-apiserver-7948647f84-7bpbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib993a07d602", MAC:"62:67:15:3d:57:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.297713 containerd[2102]: 2025-09-12 17:36:30.284 [INFO][5469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2" Namespace="calico-apiserver" Pod="calico-apiserver-7948647f84-7bpbq" WorkloadEndpoint="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:30.393081 systemd-networkd[1650]: cali778e5588021: Link UP Sep 12 17:36:30.400926 systemd-networkd[1650]: cali778e5588021: Gained carrier Sep 12 17:36:30.465781 systemd-networkd[1650]: cali21f5537f251: Gained IPv6LL Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:29.977 [INFO][5456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0 csi-node-driver- calico-system 4c274700-5d2a-486e-a911-3e7d7162510d 925 0 2025-09-12 17:36:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-204 csi-node-driver-c8fhr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali778e5588021 [] [] }} ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:29.977 [INFO][5456] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.103 [INFO][5495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" HandleID="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.105 [INFO][5495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" HandleID="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-204", "pod":"csi-node-driver-c8fhr", "timestamp":"2025-09-12 17:36:30.103161722 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.106 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.176 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.176 [INFO][5495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.199 [INFO][5495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.228 [INFO][5495] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.253 [INFO][5495] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.261 [INFO][5495] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.286 [INFO][5495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.286 [INFO][5495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.300 [INFO][5495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.337 [INFO][5495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" 
host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.365 [INFO][5495] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.6/26] block=192.168.48.0/26 handle="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.365 [INFO][5495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.6/26] handle="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" host="ip-172-31-16-204" Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.365 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:30.494995 containerd[2102]: 2025-09-12 17:36:30.365 [INFO][5495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.6/26] IPv6=[] ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" HandleID="k8s-pod-network.eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.380 [INFO][5456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c274700-5d2a-486e-a911-3e7d7162510d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"csi-node-driver-c8fhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali778e5588021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.381 [INFO][5456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.6/32] ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.381 [INFO][5456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali778e5588021 ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" 
Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.406 [INFO][5456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.418 [INFO][5456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c274700-5d2a-486e-a911-3e7d7162510d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a", Pod:"csi-node-driver-c8fhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali778e5588021", MAC:"7a:00:96:97:65:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:30.498748 containerd[2102]: 2025-09-12 17:36:30.450 [INFO][5456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a" Namespace="calico-system" Pod="csi-node-driver-c8fhr" WorkloadEndpoint="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:30.530237 containerd[2102]: time="2025-09-12T17:36:30.530193938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-ct4ql,Uid:2be750b9-c275-4533-bf66-976d561de541,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db\"" Sep 12 17:36:30.550707 containerd[2102]: time="2025-09-12T17:36:30.548529005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:30.550707 containerd[2102]: time="2025-09-12T17:36:30.549966172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:30.550707 containerd[2102]: time="2025-09-12T17:36:30.549998633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.550707 containerd[2102]: time="2025-09-12T17:36:30.550176332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.626064 containerd[2102]: time="2025-09-12T17:36:30.624165353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:30.626064 containerd[2102]: time="2025-09-12T17:36:30.624245985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:30.626064 containerd[2102]: time="2025-09-12T17:36:30.624269861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.626064 containerd[2102]: time="2025-09-12T17:36:30.624401031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:30.629370 systemd[1]: run-netns-cni\x2d521537d3\x2da0d9\x2d52f3\x2dc281\x2de46907fd26bb.mount: Deactivated successfully. Sep 12 17:36:30.633029 systemd[1]: run-netns-cni\x2dcf15eb2e\x2d7971\x2d1b1f\x2d396e\x2d0a01fd9846c9.mount: Deactivated successfully. Sep 12 17:36:30.754925 kubelet[3633]: I0912 17:36:30.754120 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x9v4c" podStartSLOduration=43.754096463 podStartE2EDuration="43.754096463s" podCreationTimestamp="2025-09-12 17:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:30.753265679 +0000 UTC m=+49.759385311" watchObservedRunningTime="2025-09-12 17:36:30.754096463 +0000 UTC m=+49.760216094" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.578 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.578 [INFO][5554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" iface="eth0" netns="/var/run/netns/cni-5863a7d0-2c01-d708-6620-15ba1b0e0663" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.579 [INFO][5554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" iface="eth0" netns="/var/run/netns/cni-5863a7d0-2c01-d708-6620-15ba1b0e0663" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.580 [INFO][5554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" iface="eth0" netns="/var/run/netns/cni-5863a7d0-2c01-d708-6620-15ba1b0e0663" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.580 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.580 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.774 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.775 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.775 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.798 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.798 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.808 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:30.838197 containerd[2102]: 2025-09-12 17:36:30.821 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:30.842439 containerd[2102]: time="2025-09-12T17:36:30.839683414Z" level=info msg="TearDown network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" successfully" Sep 12 17:36:30.842439 containerd[2102]: time="2025-09-12T17:36:30.839754468Z" level=info msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" returns successfully" Sep 12 17:36:30.842439 containerd[2102]: time="2025-09-12T17:36:30.841924680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fx5v5,Uid:d68a7ce9-62a6-403c-81fa-26b71803f67f,Namespace:kube-system,Attempt:1,}" Sep 12 17:36:30.851334 systemd[1]: run-netns-cni\x2d5863a7d0\x2d2c01\x2dd708\x2d6620\x2d15ba1b0e0663.mount: Deactivated successfully. 
Sep 12 17:36:30.906762 containerd[2102]: time="2025-09-12T17:36:30.904991472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8fhr,Uid:4c274700-5d2a-486e-a911-3e7d7162510d,Namespace:calico-system,Attempt:1,} returns sandbox id \"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a\"" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.666 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.667 [INFO][5555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" iface="eth0" netns="/var/run/netns/cni-b246db4e-d92b-ea5f-a45d-7305c8bee1bf" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.667 [INFO][5555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" iface="eth0" netns="/var/run/netns/cni-b246db4e-d92b-ea5f-a45d-7305c8bee1bf" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.668 [INFO][5555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" iface="eth0" netns="/var/run/netns/cni-b246db4e-d92b-ea5f-a45d-7305c8bee1bf" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.668 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.668 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.813 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.814 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.815 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.858 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.860 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.863 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
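The teardown transcripts ([5640] for the coredns sandbox, [5663] for goldmane) all follow the same release protocol: under the host-wide lock, release allocations keyed by the handle ID, downgrade a missing handle to a WARNING rather than an error, then also release by workload ID to catch allocations made under the older keying. A simplified sketch, assuming an illustrative releaser interface rather than Calico's actual IPAM client:

    package ipamsketch

    import "log"

    // releaser abstracts the two lookup keys the log shows being tried in
    // order: the handle ID first, then the legacy workload ID.
    type releaser interface {
        ReleaseByHandle(handleID string) (found bool, err error)
        ReleaseByWorkloadID(workloadID string) error
    }

    // releaseAddresses mirrors the logged ipam_plugin.go flow (412, 429,
    // 440): a handle with no allocations is ignored with a warning, since
    // the sandbox may have been allocated under the workload ID instead.
    func releaseAddresses(r releaser, handleID, workloadID string) error {
        found, err := r.ReleaseByHandle(handleID)
        if err != nil {
            return err
        }
        if !found {
            log.Printf("WARNING: asked to release address but it doesn't exist; ignoring handle %q", handleID)
        }
        return r.ReleaseByWorkloadID(workloadID)
    }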
Sep 12 17:36:30.919690 containerd[2102]: 2025-09-12 17:36:30.889 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:30.922916 containerd[2102]: time="2025-09-12T17:36:30.920642375Z" level=info msg="TearDown network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" successfully" Sep 12 17:36:30.922916 containerd[2102]: time="2025-09-12T17:36:30.920989286Z" level=info msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" returns successfully" Sep 12 17:36:30.928576 containerd[2102]: time="2025-09-12T17:36:30.928120543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2hmns,Uid:cc21aef5-fbe0-49ed-bfa6-99bf18c52532,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:31.058282 containerd[2102]: time="2025-09-12T17:36:31.057979418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948647f84-7bpbq,Uid:be594a54-0fca-4de7-bded-7c1589b44a49,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2\"" Sep 12 17:36:31.294073 systemd-networkd[1650]: calib11c6256466: Link UP Sep 12 17:36:31.296886 systemd-networkd[1650]: calib11c6256466: Gained carrier Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.069 [INFO][5697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0 coredns-7c65d6cfc9- kube-system d68a7ce9-62a6-403c-81fa-26b71803f67f 944 0 2025-09-12 17:35:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-204 coredns-7c65d6cfc9-fx5v5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib11c6256466 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.069 [INFO][5697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.178 [INFO][5729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" HandleID="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.178 [INFO][5729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" HandleID="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-204", 
"pod":"coredns-7c65d6cfc9-fx5v5", "timestamp":"2025-09-12 17:36:31.178385193 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.179 [INFO][5729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.179 [INFO][5729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.180 [INFO][5729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.212 [INFO][5729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.223 [INFO][5729] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.237 [INFO][5729] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.241 [INFO][5729] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.245 [INFO][5729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.245 [INFO][5729] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.248 [INFO][5729] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2 Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.258 [INFO][5729] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5729] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.7/26] block=192.168.48.0/26 handle="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.7/26] handle="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" host="ip-172-31-16-204" Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:31.329193 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.7/26] IPv6=[] ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" HandleID="k8s-pod-network.450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.288 [INFO][5697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d68a7ce9-62a6-403c-81fa-26b71803f67f", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"coredns-7c65d6cfc9-fx5v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib11c6256466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.288 [INFO][5697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.7/32] ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.288 [INFO][5697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib11c6256466 ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.295 [INFO][5697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" 
WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.296 [INFO][5697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d68a7ce9-62a6-403c-81fa-26b71803f67f", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2", Pod:"coredns-7c65d6cfc9-fx5v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib11c6256466", MAC:"3e:f9:f7:29:86:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:31.334319 containerd[2102]: 2025-09-12 17:36:31.319 [INFO][5697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fx5v5" WorkloadEndpoint="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:31.426566 systemd-networkd[1650]: calif477b5bd8f7: Link UP Sep 12 17:36:31.429784 systemd-networkd[1650]: calif477b5bd8f7: Gained carrier Sep 12 17:36:31.433102 containerd[2102]: time="2025-09-12T17:36:31.432018589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:31.440553 containerd[2102]: time="2025-09-12T17:36:31.435034586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:31.441821 containerd[2102]: time="2025-09-12T17:36:31.439886168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:31.441821 containerd[2102]: time="2025-09-12T17:36:31.440033217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.129 [INFO][5707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0 goldmane-7988f88666- calico-system cc21aef5-fbe0-49ed-bfa6-99bf18c52532 945 0 2025-09-12 17:36:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-204 goldmane-7988f88666-2hmns eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif477b5bd8f7 [] [] }} ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.129 [INFO][5707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.237 [INFO][5737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" HandleID="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.239 [INFO][5737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" HandleID="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000557950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-204", "pod":"goldmane-7988f88666-2hmns", "timestamp":"2025-09-12 17:36:31.23755872 +0000 UTC"}, Hostname:"ip-172-31-16-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.239 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.277 [INFO][5737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-204' Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.303 [INFO][5737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.331 [INFO][5737] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.369 [INFO][5737] ipam/ipam.go 511: Trying affinity for 192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.374 [INFO][5737] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.378 [INFO][5737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.379 [INFO][5737] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.382 [INFO][5737] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9 Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.393 [INFO][5737] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.409 [INFO][5737] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.8/26] block=192.168.48.0/26 handle="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.409 [INFO][5737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.8/26] handle="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" host="ip-172-31-16-204" Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.409 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:31.480942 containerd[2102]: 2025-09-12 17:36:31.409 [INFO][5737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.8/26] IPv6=[] ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" HandleID="k8s-pod-network.614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.417 [INFO][5707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cc21aef5-fbe0-49ed-bfa6-99bf18c52532", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"", Pod:"goldmane-7988f88666-2hmns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif477b5bd8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.418 [INFO][5707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.8/32] ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.418 [INFO][5707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif477b5bd8f7 ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.431 [INFO][5707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.433 [INFO][5707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" 
WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cc21aef5-fbe0-49ed-bfa6-99bf18c52532", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9", Pod:"goldmane-7988f88666-2hmns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif477b5bd8f7", MAC:"4a:26:e7:e7:91:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:31.483976 containerd[2102]: 2025-09-12 17:36:31.461 [INFO][5707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9" Namespace="calico-system" Pod="goldmane-7988f88666-2hmns" WorkloadEndpoint="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:31.600033 containerd[2102]: time="2025-09-12T17:36:31.598633152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:31.600033 containerd[2102]: time="2025-09-12T17:36:31.598705984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:31.600033 containerd[2102]: time="2025-09-12T17:36:31.598797102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:31.600033 containerd[2102]: time="2025-09-12T17:36:31.599405474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:31.617499 systemd[1]: run-netns-cni\x2db246db4e\x2dd92b\x2dea5f\x2da45d\x2d7305c8bee1bf.mount: Deactivated successfully. 
Sep 12 17:36:31.648045 containerd[2102]: time="2025-09-12T17:36:31.647983111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fx5v5,Uid:d68a7ce9-62a6-403c-81fa-26b71803f67f,Namespace:kube-system,Attempt:1,} returns sandbox id \"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2\"" Sep 12 17:36:31.708291 containerd[2102]: time="2025-09-12T17:36:31.708234407Z" level=info msg="CreateContainer within sandbox \"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:36:31.742295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469272571.mount: Deactivated successfully. Sep 12 17:36:31.745411 systemd-networkd[1650]: cali778e5588021: Gained IPv6LL Sep 12 17:36:31.746360 systemd-networkd[1650]: cali263a3d7d0bb: Gained IPv6LL Sep 12 17:36:31.769179 containerd[2102]: time="2025-09-12T17:36:31.769137764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2hmns,Uid:cc21aef5-fbe0-49ed-bfa6-99bf18c52532,Namespace:calico-system,Attempt:1,} returns sandbox id \"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9\"" Sep 12 17:36:31.814237 containerd[2102]: time="2025-09-12T17:36:31.814110961Z" level=info msg="CreateContainer within sandbox \"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4001b1d1d7b8d11aecd45ed344f2d9749f6a8e570a31546ca11703b8961c4dcb\"" Sep 12 17:36:31.817541 containerd[2102]: time="2025-09-12T17:36:31.815932995Z" level=info msg="StartContainer for \"4001b1d1d7b8d11aecd45ed344f2d9749f6a8e570a31546ca11703b8961c4dcb\"" Sep 12 17:36:31.873004 systemd-networkd[1650]: calib993a07d602: Gained IPv6LL Sep 12 17:36:31.919352 containerd[2102]: time="2025-09-12T17:36:31.919298231Z" level=info msg="StartContainer for \"4001b1d1d7b8d11aecd45ed344f2d9749f6a8e570a31546ca11703b8961c4dcb\" returns successfully" Sep 12 17:36:32.687057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624466774.mount: Deactivated successfully. 
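The RunPodSandbox -> CreateContainer -> StartContainer sequence above is the CRI flow, and the hex IDs it returns (450d77d8…, 4001b1d1…) live in containerd's "k8s.io" namespace, where they can be listed with containerd's Go client. A short sketch, assuming the containerd 1.x client module:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// The same socket the containerd[2102] entries above are serving.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// kubelet keeps its sandboxes and containers in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers {
    		name := "<unknown image>"
    		if img, err := c.Image(ctx); err == nil {
    			name = img.Name()
    		}
    		fmt.Println(c.ID(), name) // e.g. 4001b1d1d7b8… and the coredns image
    	}
    }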
Sep 12 17:36:32.716257 containerd[2102]: time="2025-09-12T17:36:32.716202529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:32.717937 containerd[2102]: time="2025-09-12T17:36:32.717877212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:36:32.720100 containerd[2102]: time="2025-09-12T17:36:32.720025740Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:32.723857 containerd[2102]: time="2025-09-12T17:36:32.723788529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:32.724690 containerd[2102]: time="2025-09-12T17:36:32.724646584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.415242308s" Sep 12 17:36:32.725564 containerd[2102]: time="2025-09-12T17:36:32.724696455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:36:32.727172 containerd[2102]: time="2025-09-12T17:36:32.727138738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:36:32.729282 containerd[2102]: time="2025-09-12T17:36:32.729250712Z" level=info msg="CreateContainer within sandbox \"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:36:32.769082 systemd-networkd[1650]: calif477b5bd8f7: Gained IPv6LL Sep 12 17:36:32.840653 kubelet[3633]: I0912 17:36:32.837019 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fx5v5" podStartSLOduration=45.836994095 podStartE2EDuration="45.836994095s" podCreationTimestamp="2025-09-12 17:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:32.812522342 +0000 UTC m=+51.818641971" watchObservedRunningTime="2025-09-12 17:36:32.836994095 +0000 UTC m=+51.843113731" Sep 12 17:36:32.878202 containerd[2102]: time="2025-09-12T17:36:32.878091740Z" level=info msg="CreateContainer within sandbox \"34ce3b4279fd8065fb6e7bb21b9c81fcb9cb00d68a8c6734239bb7026822d2f2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5050bbb384335ed4b8579e67b7584d429f6035aa4c4f358ed9f6343c7508f9f8\"" Sep 12 17:36:32.881695 containerd[2102]: time="2025-09-12T17:36:32.879892532Z" level=info msg="StartContainer for \"5050bbb384335ed4b8579e67b7584d429f6035aa4c4f358ed9f6343c7508f9f8\"" Sep 12 17:36:33.073467 containerd[2102]: time="2025-09-12T17:36:33.073309870Z" level=info msg="StartContainer for \"5050bbb384335ed4b8579e67b7584d429f6035aa4c4f358ed9f6343c7508f9f8\" returns successfully" Sep 12 17:36:33.345171 systemd-networkd[1650]: 
calib11c6256466: Gained IPv6LL Sep 12 17:36:35.702589 ntpd[2053]: Listen normally on 6 vxlan.calico 192.168.48.0:123 Sep 12 17:36:35.702682 ntpd[2053]: Listen normally on 7 calica7affe24fc [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:36:35.703213 ntpd[2053]: Listen normally on 8 vxlan.calico [fe80::64ee:b9ff:feea:6a3%5]:123 Sep 12 17:36:35.703263 ntpd[2053]: Listen normally on 9 cali21f5537f251 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:36:35.703291 ntpd[2053]: Listen normally on 10 calied58e862fe7 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:36:35.703319 ntpd[2053]: Listen normally on 11 cali263a3d7d0bb [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:36:35.703351 ntpd[2053]: Listen normally on 12 calib993a07d602 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:36:35.703389 ntpd[2053]: Listen normally on 13 cali778e5588021 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:36:35.703428 ntpd[2053]: Listen normally on 14 calib11c6256466 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:36:35.703466 ntpd[2053]: Listen normally on 15 calif477b5bd8f7 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:36:36.079516 containerd[2102]: time="2025-09-12T17:36:36.079436989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:36:36.080554 containerd[2102]: time="2025-09-12T17:36:36.080463822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:36.083961 containerd[2102]: time="2025-09-12T17:36:36.083887286Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:36.084763 containerd[2102]: time="2025-09-12T17:36:36.084692431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in
3.357509286s" Sep 12 17:36:36.086194 containerd[2102]: time="2025-09-12T17:36:36.084770990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:36:36.086194 containerd[2102]: time="2025-09-12T17:36:36.085472230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:36.136173 containerd[2102]: time="2025-09-12T17:36:36.136139656Z" level=info msg="CreateContainer within sandbox \"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:36:36.157450 containerd[2102]: time="2025-09-12T17:36:36.157418711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:36.167856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733427320.mount: Deactivated successfully. Sep 12 17:36:36.196546 containerd[2102]: time="2025-09-12T17:36:36.196498015Z" level=info msg="CreateContainer within sandbox \"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f19e6d04f7fc8a58d6f31f5f199d3b0bda0380e1a6c190678e5a11eb57845fc1\"" Sep 12 17:36:36.212426 containerd[2102]: time="2025-09-12T17:36:36.212383452Z" level=info msg="StartContainer for \"f19e6d04f7fc8a58d6f31f5f199d3b0bda0380e1a6c190678e5a11eb57845fc1\"" Sep 12 17:36:36.356270 containerd[2102]: time="2025-09-12T17:36:36.356141180Z" level=info msg="StartContainer for \"f19e6d04f7fc8a58d6f31f5f199d3b0bda0380e1a6c190678e5a11eb57845fc1\" returns successfully" Sep 12 17:36:36.824884 kubelet[3633]: I0912 17:36:36.823706 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cc6686689-66mtx" podStartSLOduration=5.984642786 podStartE2EDuration="11.823319022s" podCreationTimestamp="2025-09-12 17:36:25 +0000 UTC" firstStartedPulling="2025-09-12 17:36:26.887278847 +0000 UTC m=+45.893398463" lastFinishedPulling="2025-09-12 17:36:32.725955091 +0000 UTC m=+51.732074699" observedRunningTime="2025-09-12 17:36:33.81423951 +0000 UTC m=+52.820359151" watchObservedRunningTime="2025-09-12 17:36:36.823319022 +0000 UTC m=+55.829438666" Sep 12 17:36:36.826632 kubelet[3633]: I0912 17:36:36.826498 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699d545876-frxkl" podStartSLOduration=26.241105066 podStartE2EDuration="32.826477282s" podCreationTimestamp="2025-09-12 17:36:04 +0000 UTC" firstStartedPulling="2025-09-12 17:36:29.502007869 +0000 UTC m=+48.508127477" lastFinishedPulling="2025-09-12 17:36:36.087380073 +0000 UTC m=+55.093499693" observedRunningTime="2025-09-12 17:36:36.821981126 +0000 UTC m=+55.828100756" watchObservedRunningTime="2025-09-12 17:36:36.826477282 +0000 UTC m=+55.832596910" Sep 12 17:36:37.509392 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:37.505921 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:37.505961 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:37.944105 systemd[1]: Started sshd@9-172.31.16.204:22-147.75.109.163:49036.service - OpenSSH per-connection server daemon (147.75.109.163:49036). 
Sep 12 17:36:38.188518 sshd[6025]: Accepted publickey for core from 147.75.109.163 port 49036 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:38.194383 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:38.212130 systemd-logind[2070]: New session 10 of user core. Sep 12 17:36:38.223350 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:36:39.376070 sshd[6025]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:39.390192 systemd[1]: sshd@9-172.31.16.204:22-147.75.109.163:49036.service: Deactivated successfully. Sep 12 17:36:39.397890 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:36:39.399494 systemd-logind[2070]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:36:39.414096 systemd-logind[2070]: Removed session 10. Sep 12 17:36:39.555834 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:39.554176 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:39.554206 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:40.140388 containerd[2102]: time="2025-09-12T17:36:40.140319144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.145377 containerd[2102]: time="2025-09-12T17:36:40.145086145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:36:40.163758 containerd[2102]: time="2025-09-12T17:36:40.163001765Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.167245 containerd[2102]: time="2025-09-12T17:36:40.167196893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.167944 containerd[2102]: time="2025-09-12T17:36:40.167904733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.010311079s" Sep 12 17:36:40.168049 containerd[2102]: time="2025-09-12T17:36:40.167951309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:40.287267 containerd[2102]: time="2025-09-12T17:36:40.286958514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:36:40.292616 containerd[2102]: time="2025-09-12T17:36:40.292105803Z" level=info msg="CreateContainer within sandbox \"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:40.326029 containerd[2102]: time="2025-09-12T17:36:40.324998400Z" level=info msg="CreateContainer within sandbox \"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3cf4bc9fc846393f785fd7559d02fc9789508cd2aaa4cb5ff6ed24b462df590a\"" Sep 12 
17:36:40.326698 containerd[2102]: time="2025-09-12T17:36:40.326397675Z" level=info msg="StartContainer for \"3cf4bc9fc846393f785fd7559d02fc9789508cd2aaa4cb5ff6ed24b462df590a\"" Sep 12 17:36:40.510633 containerd[2102]: time="2025-09-12T17:36:40.510585530Z" level=info msg="StartContainer for \"3cf4bc9fc846393f785fd7559d02fc9789508cd2aaa4cb5ff6ed24b462df590a\" returns successfully" Sep 12 17:36:41.018232 kubelet[3633]: I0912 17:36:41.018131 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7948647f84-ct4ql" podStartSLOduration=34.324229219 podStartE2EDuration="43.98070447s" podCreationTimestamp="2025-09-12 17:35:57 +0000 UTC" firstStartedPulling="2025-09-12 17:36:30.536010064 +0000 UTC m=+49.542129673" lastFinishedPulling="2025-09-12 17:36:40.192485304 +0000 UTC m=+59.198604924" observedRunningTime="2025-09-12 17:36:40.934773502 +0000 UTC m=+59.940893133" watchObservedRunningTime="2025-09-12 17:36:40.98070447 +0000 UTC m=+59.986824100" Sep 12 17:36:41.486641 containerd[2102]: time="2025-09-12T17:36:41.486581678Z" level=info msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" Sep 12 17:36:41.610918 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:41.610191 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:41.610219 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.068 [WARNING][6104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f5cdb510-d3ed-48d3-9fb7-62a04476b44d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83", Pod:"coredns-7c65d6cfc9-x9v4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied58e862fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 
12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.073 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.073 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" iface="eth0" netns="" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.073 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.073 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.768 [INFO][6115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.773 [INFO][6115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.774 [INFO][6115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.803 [WARNING][6115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.803 [INFO][6115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.808 [INFO][6115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:42.817540 containerd[2102]: 2025-09-12 17:36:42.814 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:42.817540 containerd[2102]: time="2025-09-12T17:36:42.817444787Z" level=info msg="TearDown network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" successfully" Sep 12 17:36:42.817540 containerd[2102]: time="2025-09-12T17:36:42.817479773Z" level=info msg="StopPodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" returns successfully" Sep 12 17:36:42.902108 containerd[2102]: time="2025-09-12T17:36:42.902057669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.904500 containerd[2102]: time="2025-09-12T17:36:42.904440121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:36:42.907903 containerd[2102]: time="2025-09-12T17:36:42.907855091Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.911339 containerd[2102]: time="2025-09-12T17:36:42.911304738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.912478 containerd[2102]: time="2025-09-12T17:36:42.912366347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.625351378s" Sep 12 17:36:42.912478 containerd[2102]: time="2025-09-12T17:36:42.912411939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:36:42.923250 containerd[2102]: time="2025-09-12T17:36:42.923102485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:42.949384 containerd[2102]: time="2025-09-12T17:36:42.949231911Z" level=info msg="CreateContainer within sandbox \"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:36:43.009766 containerd[2102]: time="2025-09-12T17:36:43.009610962Z" level=info msg="CreateContainer within sandbox \"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"081196a2549465d665d930e9f2c90a9856caaa9ad03272e5b3d0fccc8f49d743\"" Sep 12 17:36:43.011816 containerd[2102]: time="2025-09-12T17:36:43.011239789Z" level=info msg="StartContainer for \"081196a2549465d665d930e9f2c90a9856caaa9ad03272e5b3d0fccc8f49d743\"" Sep 12 17:36:43.023193 containerd[2102]: time="2025-09-12T17:36:43.023125415Z" level=info msg="RemovePodSandbox for \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" Sep 12 17:36:43.031970 containerd[2102]: time="2025-09-12T17:36:43.031573835Z" level=info msg="Forcibly stopping sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\"" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.101 [WARNING][6146] cni-plugin/k8s.go 604: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f5cdb510-d3ed-48d3-9fb7-62a04476b44d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"16038d42e798dff5e8189a113560fef2920f3daf4cafabaa7d57459581130d83", Pod:"coredns-7c65d6cfc9-x9v4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied58e862fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.102 [INFO][6146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.102 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" iface="eth0" netns="" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.102 [INFO][6146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.102 [INFO][6146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.174 [INFO][6158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.175 [INFO][6158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.175 [INFO][6158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.187 [WARNING][6158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.187 [INFO][6158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" HandleID="k8s-pod-network.33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--x9v4c-eth0" Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.190 [INFO][6158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:43.204853 containerd[2102]: 2025-09-12 17:36:43.197 [INFO][6146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288" Sep 12 17:36:43.204853 containerd[2102]: time="2025-09-12T17:36:43.204795420Z" level=info msg="TearDown network for sandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" successfully" Sep 12 17:36:43.256540 containerd[2102]: time="2025-09-12T17:36:43.255561519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:43.278830 containerd[2102]: time="2025-09-12T17:36:43.278685735Z" level=info msg="RemovePodSandbox \"33680a3fc0c8fb79189d378975230e145afe63b269acf5767c7946a043de1288\" returns successfully" Sep 12 17:36:43.300096 containerd[2102]: time="2025-09-12T17:36:43.300043419Z" level=info msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" Sep 12 17:36:43.332993 containerd[2102]: time="2025-09-12T17:36:43.332394903Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:43.335960 containerd[2102]: time="2025-09-12T17:36:43.335566328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:36:43.371734 containerd[2102]: time="2025-09-12T17:36:43.367018586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 443.850552ms" Sep 12 17:36:43.371734 containerd[2102]: time="2025-09-12T17:36:43.367071320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:43.402247 containerd[2102]: time="2025-09-12T17:36:43.402196679Z" level=info msg="StartContainer for \"081196a2549465d665d930e9f2c90a9856caaa9ad03272e5b3d0fccc8f49d743\" returns successfully" Sep 12 17:36:43.441158 containerd[2102]: time="2025-09-12T17:36:43.439957949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:36:43.502089 
containerd[2102]: time="2025-09-12T17:36:43.502036630Z" level=info msg="CreateContainer within sandbox \"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:43.545143 containerd[2102]: time="2025-09-12T17:36:43.542134662Z" level=info msg="CreateContainer within sandbox \"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d6822c1734d3615ff28fd1e27b9dd5eebdef92df52a63c99e7058bf301a10da\"" Sep 12 17:36:43.547989 containerd[2102]: time="2025-09-12T17:36:43.547879761Z" level=info msg="StartContainer for \"4d6822c1734d3615ff28fd1e27b9dd5eebdef92df52a63c99e7058bf301a10da\"" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.426 [WARNING][6189] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.426 [INFO][6189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.426 [INFO][6189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" iface="eth0" netns="" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.426 [INFO][6189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.426 [INFO][6189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.562 [INFO][6200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.563 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.563 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.573 [WARNING][6200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.573 [INFO][6200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.580 [INFO][6200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:43.587040 containerd[2102]: 2025-09-12 17:36:43.584 [INFO][6189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.587040 containerd[2102]: time="2025-09-12T17:36:43.586389488Z" level=info msg="TearDown network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" successfully" Sep 12 17:36:43.587040 containerd[2102]: time="2025-09-12T17:36:43.586421826Z" level=info msg="StopPodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" returns successfully" Sep 12 17:36:43.587040 containerd[2102]: time="2025-09-12T17:36:43.587014029Z" level=info msg="RemovePodSandbox for \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" Sep 12 17:36:43.587040 containerd[2102]: time="2025-09-12T17:36:43.587048553Z" level=info msg="Forcibly stopping sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\"" Sep 12 17:36:43.650971 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:43.650628 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:43.650680 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:43.735739 containerd[2102]: time="2025-09-12T17:36:43.735644777Z" level=info msg="StartContainer for \"4d6822c1734d3615ff28fd1e27b9dd5eebdef92df52a63c99e7058bf301a10da\" returns successfully" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.667 [WARNING][6216] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" WorkloadEndpoint="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.668 [INFO][6216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.668 [INFO][6216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" iface="eth0" netns="" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.668 [INFO][6216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.668 [INFO][6216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.717 [INFO][6246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.718 [INFO][6246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.718 [INFO][6246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.729 [WARNING][6246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.731 [INFO][6246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" HandleID="k8s-pod-network.be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Workload="ip--172--31--16--204-k8s-whisker--779f76fdb--m4z9n-eth0" Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.736 [INFO][6246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:43.743688 containerd[2102]: 2025-09-12 17:36:43.740 [INFO][6216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e" Sep 12 17:36:43.744420 containerd[2102]: time="2025-09-12T17:36:43.743780697Z" level=info msg="TearDown network for sandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" successfully" Sep 12 17:36:43.753946 containerd[2102]: time="2025-09-12T17:36:43.753815965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:43.753946 containerd[2102]: time="2025-09-12T17:36:43.753902844Z" level=info msg="RemovePodSandbox \"be021b360c7e230ab9375cb1eb33804850262ffc58c17b9db553f3075d252c1e\" returns successfully" Sep 12 17:36:43.758777 containerd[2102]: time="2025-09-12T17:36:43.758633031Z" level=info msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.815 [WARNING][6270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c274700-5d2a-486e-a911-3e7d7162510d", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a", Pod:"csi-node-driver-c8fhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali778e5588021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.815 [INFO][6270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.815 [INFO][6270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" iface="eth0" netns="" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.815 [INFO][6270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.815 [INFO][6270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.861 [INFO][6278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.862 [INFO][6278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.862 [INFO][6278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.869 [WARNING][6278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.871 [INFO][6278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.873 [INFO][6278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:43.879806 containerd[2102]: 2025-09-12 17:36:43.876 [INFO][6270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:43.883277 containerd[2102]: time="2025-09-12T17:36:43.879868982Z" level=info msg="TearDown network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" successfully" Sep 12 17:36:43.883277 containerd[2102]: time="2025-09-12T17:36:43.879902436Z" level=info msg="StopPodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" returns successfully" Sep 12 17:36:43.883277 containerd[2102]: time="2025-09-12T17:36:43.880433430Z" level=info msg="RemovePodSandbox for \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" Sep 12 17:36:43.883277 containerd[2102]: time="2025-09-12T17:36:43.880464452Z" level=info msg="Forcibly stopping sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\"" Sep 12 17:36:44.014440 kubelet[3633]: I0912 17:36:43.997933 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7948647f84-7bpbq" podStartSLOduration=34.639614579 podStartE2EDuration="46.971119754s" podCreationTimestamp="2025-09-12 17:35:57 +0000 UTC" firstStartedPulling="2025-09-12 17:36:31.063941548 +0000 UTC m=+50.070061158" lastFinishedPulling="2025-09-12 17:36:43.39544671 +0000 UTC m=+62.401566333" observedRunningTime="2025-09-12 17:36:43.968869435 +0000 UTC m=+62.974989064" watchObservedRunningTime="2025-09-12 17:36:43.971119754 +0000 UTC m=+62.977239383" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:43.994 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c274700-5d2a-486e-a911-3e7d7162510d", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a", Pod:"csi-node-driver-c8fhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali778e5588021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:43.997 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:43.997 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" iface="eth0" netns="" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:43.997 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:43.997 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.065 [INFO][6304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.065 [INFO][6304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.065 [INFO][6304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.075 [WARNING][6304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.075 [INFO][6304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" HandleID="k8s-pod-network.c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Workload="ip--172--31--16--204-k8s-csi--node--driver--c8fhr-eth0" Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.078 [INFO][6304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:44.084105 containerd[2102]: 2025-09-12 17:36:44.081 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a" Sep 12 17:36:44.085562 containerd[2102]: time="2025-09-12T17:36:44.084148462Z" level=info msg="TearDown network for sandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" successfully" Sep 12 17:36:44.201084 containerd[2102]: time="2025-09-12T17:36:44.200858787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:44.201084 containerd[2102]: time="2025-09-12T17:36:44.200960207Z" level=info msg="RemovePodSandbox \"c86013caa2b5d500272705c93ecbef1e8ee912c9e63e3fe6b48334fc9b10b71a\" returns successfully" Sep 12 17:36:44.202150 containerd[2102]: time="2025-09-12T17:36:44.201938950Z" level=info msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.280 [WARNING][6320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0", GenerateName:"calico-kube-controllers-699d545876-", Namespace:"calico-system", SelfLink:"", UID:"75effd7d-c738-4b1f-a43c-f81ac2da3610", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d545876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9", Pod:"calico-kube-controllers-699d545876-frxkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21f5537f251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.282 [INFO][6320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.282 [INFO][6320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" iface="eth0" netns="" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.282 [INFO][6320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.282 [INFO][6320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.324 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.324 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.324 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.332 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.333 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.335 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:44.345028 containerd[2102]: 2025-09-12 17:36:44.340 [INFO][6320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.345028 containerd[2102]: time="2025-09-12T17:36:44.344530932Z" level=info msg="TearDown network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" successfully" Sep 12 17:36:44.345028 containerd[2102]: time="2025-09-12T17:36:44.344575388Z" level=info msg="StopPodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" returns successfully" Sep 12 17:36:44.347072 containerd[2102]: time="2025-09-12T17:36:44.346248219Z" level=info msg="RemovePodSandbox for \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" Sep 12 17:36:44.347072 containerd[2102]: time="2025-09-12T17:36:44.346286073Z" level=info msg="Forcibly stopping sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\"" Sep 12 17:36:44.415817 systemd[1]: Started sshd@10-172.31.16.204:22-147.75.109.163:46986.service - OpenSSH per-connection server daemon (147.75.109.163:46986). Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.442 [WARNING][6343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0", GenerateName:"calico-kube-controllers-699d545876-", Namespace:"calico-system", SelfLink:"", UID:"75effd7d-c738-4b1f-a43c-f81ac2da3610", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d545876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"68282927b0b8ffe2ea6ee2e7ddd7e6ec4b1e7d0bc09c54886a00d1b1f012c7b9", Pod:"calico-kube-controllers-699d545876-frxkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21f5537f251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.443 [INFO][6343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.443 [INFO][6343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" iface="eth0" netns="" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.443 [INFO][6343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.443 [INFO][6343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.510 [INFO][6352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.511 [INFO][6352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.511 [INFO][6352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.529 [WARNING][6352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.529 [INFO][6352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" HandleID="k8s-pod-network.18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Workload="ip--172--31--16--204-k8s-calico--kube--controllers--699d545876--frxkl-eth0" Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.534 [INFO][6352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:44.561577 containerd[2102]: 2025-09-12 17:36:44.545 [INFO][6343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45" Sep 12 17:36:44.561577 containerd[2102]: time="2025-09-12T17:36:44.555925913Z" level=info msg="TearDown network for sandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" successfully" Sep 12 17:36:44.584084 containerd[2102]: time="2025-09-12T17:36:44.584036770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:44.584239 containerd[2102]: time="2025-09-12T17:36:44.584130468Z" level=info msg="RemovePodSandbox \"18ddb2112d883bdef74d55fd9094640d8ddd3a067549be62de02508085424f45\" returns successfully" Sep 12 17:36:44.584951 containerd[2102]: time="2025-09-12T17:36:44.584767749Z" level=info msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.674 [WARNING][6368] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"be594a54-0fca-4de7-bded-7c1589b44a49", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2", Pod:"calico-apiserver-7948647f84-7bpbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib993a07d602", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.674 [INFO][6368] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.674 [INFO][6368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" iface="eth0" netns="" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.674 [INFO][6368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.674 [INFO][6368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.754 [INFO][6375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.755 [INFO][6375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.755 [INFO][6375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.769 [WARNING][6375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.769 [INFO][6375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.771 [INFO][6375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:44.777031 containerd[2102]: 2025-09-12 17:36:44.774 [INFO][6368] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:44.780807 containerd[2102]: time="2025-09-12T17:36:44.777285992Z" level=info msg="TearDown network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" successfully" Sep 12 17:36:44.780807 containerd[2102]: time="2025-09-12T17:36:44.777346651Z" level=info msg="StopPodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" returns successfully" Sep 12 17:36:44.780807 containerd[2102]: time="2025-09-12T17:36:44.778917980Z" level=info msg="RemovePodSandbox for \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" Sep 12 17:36:44.780807 containerd[2102]: time="2025-09-12T17:36:44.779117163Z" level=info msg="Forcibly stopping sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\"" Sep 12 17:36:44.798113 sshd[6348]: Accepted publickey for core from 147.75.109.163 port 46986 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:44.804995 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:44.840496 systemd-logind[2070]: New session 11 of user core. Sep 12 17:36:44.846490 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:44.912 [WARNING][6389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"be594a54-0fca-4de7-bded-7c1589b44a49", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"15e4e4d66460fa19b844e6e089ceae2c86e29c1a8b362cb1fe61adbd471a3ae2", Pod:"calico-apiserver-7948647f84-7bpbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib993a07d602", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:44.912 [INFO][6389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:44.912 [INFO][6389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" iface="eth0" netns="" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:44.912 [INFO][6389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:44.912 [INFO][6389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.140 [INFO][6398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.142 [INFO][6398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.145 [INFO][6398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.177 [WARNING][6398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.177 [INFO][6398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" HandleID="k8s-pod-network.4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--7bpbq-eth0" Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.191 [INFO][6398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:45.239879 containerd[2102]: 2025-09-12 17:36:45.216 [INFO][6389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85" Sep 12 17:36:45.239879 containerd[2102]: time="2025-09-12T17:36:45.239787204Z" level=info msg="TearDown network for sandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" successfully" Sep 12 17:36:45.281222 containerd[2102]: time="2025-09-12T17:36:45.279256624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:45.281222 containerd[2102]: time="2025-09-12T17:36:45.280543383Z" level=info msg="RemovePodSandbox \"4678989d348039c052b2a30388464dba9152bdc94debb78c4b7d06985f94ae85\" returns successfully" Sep 12 17:36:45.289025 containerd[2102]: time="2025-09-12T17:36:45.288955707Z" level=info msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" Sep 12 17:36:45.699935 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:45.696821 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:45.696867 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:46.023582 systemd[1]: run-containerd-runc-k8s.io-f19e6d04f7fc8a58d6f31f5f199d3b0bda0380e1a6c190678e5a11eb57845fc1-runc.6oPRCg.mount: Deactivated successfully. Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.584 [WARNING][6416] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d68a7ce9-62a6-403c-81fa-26b71803f67f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2", Pod:"coredns-7c65d6cfc9-fx5v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib11c6256466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.592 [INFO][6416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.592 [INFO][6416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" iface="eth0" netns="" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.592 [INFO][6416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.593 [INFO][6416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.884 [INFO][6427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.884 [INFO][6427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.905 [INFO][6427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.939 [WARNING][6427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.939 [INFO][6427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.950 [INFO][6427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:46.083840 containerd[2102]: 2025-09-12 17:36:45.963 [INFO][6416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.084849 containerd[2102]: time="2025-09-12T17:36:46.084812923Z" level=info msg="TearDown network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" successfully" Sep 12 17:36:46.084964 containerd[2102]: time="2025-09-12T17:36:46.084947375Z" level=info msg="StopPodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" returns successfully" Sep 12 17:36:46.175429 containerd[2102]: time="2025-09-12T17:36:46.175377567Z" level=info msg="RemovePodSandbox for \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" Sep 12 17:36:46.175429 containerd[2102]: time="2025-09-12T17:36:46.175428619Z" level=info msg="Forcibly stopping sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\"" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.539 [WARNING][6462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d68a7ce9-62a6-403c-81fa-26b71803f67f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"450d77d8021ab3506a0d16d4b8bf80209e1181df7d1c11b43a4b84df542859e2", Pod:"coredns-7c65d6cfc9-fx5v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib11c6256466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.539 [INFO][6462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.539 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" iface="eth0" netns="" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.539 [INFO][6462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.540 [INFO][6462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.768 [INFO][6475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.769 [INFO][6475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.769 [INFO][6475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.804 [WARNING][6475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.806 [INFO][6475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" HandleID="k8s-pod-network.59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Workload="ip--172--31--16--204-k8s-coredns--7c65d6cfc9--fx5v5-eth0" Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.810 [INFO][6475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:46.849357 containerd[2102]: 2025-09-12 17:36:46.834 [INFO][6462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0" Sep 12 17:36:46.868128 containerd[2102]: time="2025-09-12T17:36:46.867570749Z" level=info msg="TearDown network for sandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" successfully" Sep 12 17:36:46.939938 containerd[2102]: time="2025-09-12T17:36:46.938354555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:46.939938 containerd[2102]: time="2025-09-12T17:36:46.938455214Z" level=info msg="RemovePodSandbox \"59a28fb3c890ed7c3698bc9075d2f7ab204fa212d4befb0e905ea220300af3c0\" returns successfully" Sep 12 17:36:46.993549 containerd[2102]: time="2025-09-12T17:36:46.993508271Z" level=info msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.186 [WARNING][6489] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be750b9-c275-4533-bf66-976d561de541", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db", Pod:"calico-apiserver-7948647f84-ct4ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali263a3d7d0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.188 [INFO][6489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.188 [INFO][6489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" iface="eth0" netns="" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.188 [INFO][6489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.188 [INFO][6489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.252 [INFO][6496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.252 [INFO][6496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.252 [INFO][6496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.267 [WARNING][6496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.267 [INFO][6496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.270 [INFO][6496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:47.277384 containerd[2102]: 2025-09-12 17:36:47.274 [INFO][6489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.279799 containerd[2102]: time="2025-09-12T17:36:47.278315070Z" level=info msg="TearDown network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" successfully" Sep 12 17:36:47.279799 containerd[2102]: time="2025-09-12T17:36:47.278354742Z" level=info msg="StopPodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" returns successfully" Sep 12 17:36:47.281500 containerd[2102]: time="2025-09-12T17:36:47.281225082Z" level=info msg="RemovePodSandbox for \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" Sep 12 17:36:47.281500 containerd[2102]: time="2025-09-12T17:36:47.281269143Z" level=info msg="Forcibly stopping sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\"" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.458 [WARNING][6510] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0", GenerateName:"calico-apiserver-7948647f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be750b9-c275-4533-bf66-976d561de541", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948647f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"f67d03ce90a90f42c71c6081b575093966b1c5693b8a5f886410b5e3b08e99db", Pod:"calico-apiserver-7948647f84-ct4ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali263a3d7d0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.459 [INFO][6510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.459 [INFO][6510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" iface="eth0" netns="" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.459 [INFO][6510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.460 [INFO][6510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.541 [INFO][6518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.542 [INFO][6518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.542 [INFO][6518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.555 [WARNING][6518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.556 [INFO][6518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" HandleID="k8s-pod-network.ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Workload="ip--172--31--16--204-k8s-calico--apiserver--7948647f84--ct4ql-eth0" Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.569 [INFO][6518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:47.595809 containerd[2102]: 2025-09-12 17:36:47.582 [INFO][6510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd" Sep 12 17:36:47.595809 containerd[2102]: time="2025-09-12T17:36:47.594754157Z" level=info msg="TearDown network for sandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" successfully" Sep 12 17:36:47.619562 containerd[2102]: time="2025-09-12T17:36:47.614341517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:47.619562 containerd[2102]: time="2025-09-12T17:36:47.614434238Z" level=info msg="RemovePodSandbox \"ecb57542c644c3822dcce7b15db447c309c5cd649f55df1e1ba2885dfbae4ddd\" returns successfully" Sep 12 17:36:47.683331 containerd[2102]: time="2025-09-12T17:36:47.682848378Z" level=info msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" Sep 12 17:36:47.747071 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:47.744815 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:47.744825 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.887 [WARNING][6535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cc21aef5-fbe0-49ed-bfa6-99bf18c52532", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9", Pod:"goldmane-7988f88666-2hmns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif477b5bd8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.888 [INFO][6535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.889 [INFO][6535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" iface="eth0" netns="" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.889 [INFO][6535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.889 [INFO][6535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.994 [INFO][6547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.994 [INFO][6547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:47.994 [INFO][6547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:48.006 [WARNING][6547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:48.006 [INFO][6547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:48.009 [INFO][6547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:48.028551 containerd[2102]: 2025-09-12 17:36:48.015 [INFO][6535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.028551 containerd[2102]: time="2025-09-12T17:36:48.028417353Z" level=info msg="TearDown network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" successfully" Sep 12 17:36:48.028551 containerd[2102]: time="2025-09-12T17:36:48.028449175Z" level=info msg="StopPodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" returns successfully" Sep 12 17:36:48.032336 containerd[2102]: time="2025-09-12T17:36:48.031815629Z" level=info msg="RemovePodSandbox for \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" Sep 12 17:36:48.032336 containerd[2102]: time="2025-09-12T17:36:48.032079753Z" level=info msg="Forcibly stopping sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\"" Sep 12 17:36:48.207608 sshd[6348]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:48.247512 systemd[1]: sshd@10-172.31.16.204:22-147.75.109.163:46986.service: Deactivated successfully. Sep 12 17:36:48.268428 systemd-logind[2070]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:36:48.268957 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.132 [WARNING][6561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cc21aef5-fbe0-49ed-bfa6-99bf18c52532", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-204", ContainerID:"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9", Pod:"goldmane-7988f88666-2hmns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif477b5bd8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.133 [INFO][6561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.133 [INFO][6561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" iface="eth0" netns="" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.133 [INFO][6561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.133 [INFO][6561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.199 [INFO][6568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.199 [INFO][6568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.199 [INFO][6568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.228 [WARNING][6568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.228 [INFO][6568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" HandleID="k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Workload="ip--172--31--16--204-k8s-goldmane--7988f88666--2hmns-eth0" Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.242 [INFO][6568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:48.278636 containerd[2102]: 2025-09-12 17:36:48.266 [INFO][6561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a" Sep 12 17:36:48.278636 containerd[2102]: time="2025-09-12T17:36:48.278233599Z" level=info msg="TearDown network for sandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" successfully" Sep 12 17:36:48.289647 containerd[2102]: time="2025-09-12T17:36:48.282598165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:48.289647 containerd[2102]: time="2025-09-12T17:36:48.282805568Z" level=info msg="RemovePodSandbox \"7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a\" returns successfully" Sep 12 17:36:48.295004 systemd-logind[2070]: Removed session 11. Sep 12 17:36:48.907221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067531499.mount: Deactivated successfully. Sep 12 17:36:49.792806 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:49.792816 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:49.795474 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:50.444093 systemd[1]: run-containerd-runc-k8s.io-05bd44a016d54c77e0170d00d23323f3dbf9466568847faff92f2d7e62c1044c-runc.Izb18s.mount: Deactivated successfully. 
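The teardown above follows a fixed shape: acquire the host-wide IPAM lock, try to release the allocation by its handle ID, and when the handle is already gone (the ipam_plugin.go 429 warning) fall back to releasing by workload ID, treating a missing address as success so that repeated StopPodSandbox/RemovePodSandbox calls stay idempotent. A minimal Go sketch of that control flow; releaseByHandle/releaseByWorkload are hypothetical stand-ins for Calico's datastore calls:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errNotFound stands in for the datastore's "allocation does not exist" error.
var errNotFound = errors.New("address not found")

// hostWideLock mirrors the "host-wide IPAM lock" the plugin acquires before
// touching allocation state.
var hostWideLock sync.Mutex

// releaseIP follows the order seen in the log: release by handle ID first,
// then fall back to the workload ID, ignoring missing allocations so the
// teardown is safe to run more than once.
func releaseIP(handleID, workloadID string,
	releaseByHandle, releaseByWorkload func(string) error) error {

	hostWideLock.Lock() // "Acquired host-wide IPAM lock."
	defer hostWideLock.Unlock()

	if err := releaseByHandle(handleID); err == nil {
		return nil
	} else if !errors.Is(err, errNotFound) {
		return err
	}
	// "Asked to release address but it doesn't exist. Ignoring" -> fall back.
	if err := releaseByWorkload(workloadID); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	gone := func(string) error { return errNotFound } // nothing allocated anymore
	err := releaseIP("k8s-pod-network.7ea05edff963dd91efe28ad51f4267881ed4d4ce17729aa1c96e0902716a0b0a",
		"goldmane-7988f88666-2hmns", gone, gone)
	fmt.Println("teardown complete, err =", err) // err = <nil>: missing address is ignored
}
```

The second, forced teardown in the log takes exactly this fallback path: both releases report "doesn't exist", the lock is dropped, and RemovePodSandbox still returns successfully.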
Sep 12 17:36:50.869181 containerd[2102]: time="2025-09-12T17:36:50.763366360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:36:50.897786 containerd[2102]: time="2025-09-12T17:36:50.897603787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:50.962571 containerd[2102]: time="2025-09-12T17:36:50.961852139Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:50.969868 containerd[2102]: time="2025-09-12T17:36:50.969520549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:51.013618 containerd[2102]: time="2025-09-12T17:36:51.013559113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 7.570377221s" Sep 12 17:36:51.027636 containerd[2102]: time="2025-09-12T17:36:51.027565788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:36:51.190799 containerd[2102]: time="2025-09-12T17:36:51.189185750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:36:51.480516 containerd[2102]: time="2025-09-12T17:36:51.480455071Z" level=info msg="CreateContainer within sandbox \"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:36:51.674018 containerd[2102]: time="2025-09-12T17:36:51.672405261Z" level=info msg="CreateContainer within sandbox \"614f25df5c94582c597a7d5044dc5cec92299faf80b025bad566d3e7a3b475a9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ccce3726f7c259e9b3cb1cb301f3924cf8a7b597ed4be46294a80c0fac1f0807\"" Sep 12 17:36:51.678480 containerd[2102]: time="2025-09-12T17:36:51.678085624Z" level=info msg="StartContainer for \"ccce3726f7c259e9b3cb1cb301f3924cf8a7b597ed4be46294a80c0fac1f0807\"" Sep 12 17:36:51.845609 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:51.842535 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:51.842575 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:52.191563 containerd[2102]: time="2025-09-12T17:36:52.191105311Z" level=info msg="StartContainer for \"ccce3726f7c259e9b3cb1cb301f3924cf8a7b597ed4be46294a80c0fac1f0807\" returns successfully" Sep 12 17:36:53.120321 systemd[1]: run-containerd-runc-k8s.io-ccce3726f7c259e9b3cb1cb301f3924cf8a7b597ed4be46294a80c0fac1f0807-runc.mEY7Pp.mount: Deactivated successfully. 
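The pull report above pairs a byte count with an elapsed wall-clock time, which makes the effective transfer rate easy to back out; a small worked example using only the figures from these lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken directly from the containerd log lines above.
	bytesRead := 66357526.0                 // "active requests=0, bytes read=66357526"
	elapsed := 7570377221 * time.Nanosecond // "in 7.570377221s"

	rate := bytesRead / elapsed.Seconds()
	fmt.Printf("goldmane pull: %.0f B/s (%.2f MiB/s)\n", rate, rate/(1<<20))
	// => roughly 8.77e6 B/s, i.e. about 8.4 MiB/s for the 66 MB image.
}
```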
Sep 12 17:36:53.244976 kubelet[3633]: I0912 17:36:53.163535 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-2hmns" podStartSLOduration=30.704408781 podStartE2EDuration="50.075292213s" podCreationTimestamp="2025-09-12 17:36:03 +0000 UTC" firstStartedPulling="2025-09-12 17:36:31.778143109 +0000 UTC m=+50.784262727" lastFinishedPulling="2025-09-12 17:36:51.149026544 +0000 UTC m=+70.155146159" observedRunningTime="2025-09-12 17:36:53.074472699 +0000 UTC m=+72.080592331" watchObservedRunningTime="2025-09-12 17:36:53.075292213 +0000 UTC m=+72.081411843" Sep 12 17:36:53.264877 systemd[1]: Started sshd@11-172.31.16.204:22-147.75.109.163:41310.service - OpenSSH per-connection server daemon (147.75.109.163:41310). Sep 12 17:36:53.594020 sshd[6681]: Accepted publickey for core from 147.75.109.163 port 41310 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:53.601893 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:53.644354 systemd-logind[2070]: New session 12 of user core. Sep 12 17:36:53.648976 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:36:53.767786 containerd[2102]: time="2025-09-12T17:36:53.766042732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.768866 containerd[2102]: time="2025-09-12T17:36:53.768679890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:36:53.771449 containerd[2102]: time="2025-09-12T17:36:53.771402359Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.775440 containerd[2102]: time="2025-09-12T17:36:53.775391922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.777573 containerd[2102]: time="2025-09-12T17:36:53.777099937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.587868178s" Sep 12 17:36:53.777573 containerd[2102]: time="2025-09-12T17:36:53.777160907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:36:53.795588 containerd[2102]: time="2025-09-12T17:36:53.795534859Z" level=info msg="CreateContainer within sandbox \"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:36:53.820344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218370945.mount: Deactivated successfully. 
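The pod_startup_latency_tracker line encodes a small calculation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time does not count against the startup SLO. Reproducing the arithmetic with the timestamps from the line above:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 17:36:03 +0000 UTC")
	firstPull := mustParse("2025-09-12 17:36:31.778143109 +0000 UTC")
	lastPull := mustParse("2025-09-12 17:36:51.149026544 +0000 UTC")
	running := mustParse("2025-09-12 17:36:53.075292213 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull time is excluded from the SLO

	fmt.Println("E2E:", e2e) // 50.075292213s, matching the log
	fmt.Println("SLO:", slo) // ~30.704408778s vs the logged 30.704408781
}
```

50.075292213s end to end, minus a 19.370883435s pull, leaves the logged ~30.704s SLO duration; the last few nanoseconds differ only by rounding.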
Sep 12 17:36:53.821161 containerd[2102]: time="2025-09-12T17:36:53.820673892Z" level=info msg="CreateContainer within sandbox \"eed93982da8c092d7e4cf2abdc0b2d21433e5b07d764af2901bb09d426c7464a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1cbd5fff08cb36237b5ac0099d434316183f20a126982279f6c3a5ff6aa55d4a\"" Sep 12 17:36:53.822375 containerd[2102]: time="2025-09-12T17:36:53.822133641Z" level=info msg="StartContainer for \"1cbd5fff08cb36237b5ac0099d434316183f20a126982279f6c3a5ff6aa55d4a\"" Sep 12 17:36:53.891755 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:53.888803 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:53.888849 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:53.976117 containerd[2102]: time="2025-09-12T17:36:53.975275590Z" level=info msg="StartContainer for \"1cbd5fff08cb36237b5ac0099d434316183f20a126982279f6c3a5ff6aa55d4a\" returns successfully" Sep 12 17:36:54.126173 kubelet[3633]: I0912 17:36:54.124669 3633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c8fhr" podStartSLOduration=27.260610801 podStartE2EDuration="50.12463964s" podCreationTimestamp="2025-09-12 17:36:04 +0000 UTC" firstStartedPulling="2025-09-12 17:36:30.91429745 +0000 UTC m=+49.920417062" lastFinishedPulling="2025-09-12 17:36:53.778326291 +0000 UTC m=+72.784445901" observedRunningTime="2025-09-12 17:36:54.073862176 +0000 UTC m=+73.079981805" watchObservedRunningTime="2025-09-12 17:36:54.12463964 +0000 UTC m=+73.130759269" Sep 12 17:36:54.768658 sshd[6681]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:54.778777 systemd[1]: sshd@11-172.31.16.204:22-147.75.109.163:41310.service: Deactivated successfully. Sep 12 17:36:54.782538 systemd-logind[2070]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:36:54.783433 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:36:54.786063 systemd-logind[2070]: Removed session 12. Sep 12 17:36:54.795603 kubelet[3633]: I0912 17:36:54.785561 3633 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:36:54.803837 systemd[1]: Started sshd@12-172.31.16.204:22-147.75.109.163:41318.service - OpenSSH per-connection server daemon (147.75.109.163:41318). Sep 12 17:36:54.812057 kubelet[3633]: I0912 17:36:54.811896 3633 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:36:54.976644 sshd[6755]: Accepted publickey for core from 147.75.109.163 port 41318 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:54.978323 sshd[6755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:54.987599 systemd-logind[2070]: New session 13 of user core. Sep 12 17:36:54.991554 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:36:55.522414 sshd[6755]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:55.541604 systemd[1]: sshd@12-172.31.16.204:22-147.75.109.163:41318.service: Deactivated successfully. Sep 12 17:36:55.558319 systemd-logind[2070]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:36:55.572159 systemd[1]: Started sshd@13-172.31.16.204:22-147.75.109.163:41330.service - OpenSSH per-connection server daemon (147.75.109.163:41330). 
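The csi_plugin.go lines show kubelet's plugin watcher completing a registration handshake for csi.tigera.io over a socket it found under /var/lib/kubelet/plugins_registry/. A sketch of the driver-side registrar that handshake talks to, assuming the pluginregistration v1 gRPC API (kubelet calls GetInfo, then reports the outcome via NotifyRegistrationStatus); the socket paths follow the usual node-driver-registrar layout:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers kubelet's plugin-watcher handshake: kubelet calls GetInfo
// to learn the driver name, endpoint, and versions ("Trying to validate a new
// CSI Driver ..."), then reports the result via NotifyRegistrationStatus
// ("Register new plugin ...").
type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // "versions: 1.0.0" in the log
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration result: registered=%v err=%q", s.PluginRegistered, s.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches plugins_registry/ and dials any socket that appears.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, registrar{})
	log.Fatal(s.Serve(l))
}
```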
Sep 12 17:36:55.572656 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:36:55.577457 systemd-logind[2070]: Removed session 13. Sep 12 17:36:55.791205 sshd[6787]: Accepted publickey for core from 147.75.109.163 port 41330 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:55.798437 sshd[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:55.811160 systemd-logind[2070]: New session 14 of user core. Sep 12 17:36:55.818167 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:36:55.939160 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:36:55.938756 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:36:55.938766 systemd-resolved[1979]: Flushed all caches. Sep 12 17:36:56.192409 sshd[6787]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:56.199580 systemd[1]: sshd@13-172.31.16.204:22-147.75.109.163:41330.service: Deactivated successfully. Sep 12 17:36:56.201707 systemd-logind[2070]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:36:56.207326 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:36:56.209095 systemd-logind[2070]: Removed session 14. Sep 12 17:37:01.233519 systemd[1]: Started sshd@14-172.31.16.204:22-147.75.109.163:37782.service - OpenSSH per-connection server daemon (147.75.109.163:37782). Sep 12 17:37:01.925195 sshd[6806]: Accepted publickey for core from 147.75.109.163 port 37782 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:02.001534 sshd[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:02.166833 systemd-logind[2070]: New session 15 of user core. Sep 12 17:37:02.181416 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:37:03.560993 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:03.555613 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:03.555647 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:03.578145 sshd[6806]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:03.608817 systemd[1]: sshd@14-172.31.16.204:22-147.75.109.163:37782.service: Deactivated successfully. Sep 12 17:37:03.624887 systemd-logind[2070]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:37:03.672522 systemd[1]: Started sshd@15-172.31.16.204:22-147.75.109.163:37794.service - OpenSSH per-connection server daemon (147.75.109.163:37794). Sep 12 17:37:03.673025 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:37:03.687550 systemd-logind[2070]: Removed session 15. Sep 12 17:37:03.910031 sshd[6823]: Accepted publickey for core from 147.75.109.163 port 37794 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:03.912524 sshd[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:03.919814 systemd-logind[2070]: New session 16 of user core. Sep 12 17:37:03.924156 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:37:04.801914 sshd[6823]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:04.812129 systemd[1]: sshd@15-172.31.16.204:22-147.75.109.163:37794.service: Deactivated successfully. Sep 12 17:37:04.818110 systemd-logind[2070]: Session 16 logged out. Waiting for processes to exit. 
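Each "OpenSSH per-connection server daemon" unit above is systemd socket activation with Accept=yes: systemd accepts the TCP connection itself, spawns a per-connection sshd@… instance, and hands it the already-accepted socket. Under that protocol the child finds the connection as fd 3 with LISTEN_PID/LISTEN_FDS set in its environment; a minimal Go sketch of the receiving side:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
)

// activatedConn recovers the single already-accepted connection a systemd
// Accept=yes template instance inherits: fd 3 (SD_LISTEN_FDS_START), guarded
// by the LISTEN_PID and LISTEN_FDS environment variables.
func activatedConn() (net.Conn, error) {
	if pid, _ := strconv.Atoi(os.Getenv("LISTEN_PID")); pid != os.Getpid() {
		return nil, fmt.Errorf("not socket-activated for this pid")
	}
	if n, _ := strconv.Atoi(os.Getenv("LISTEN_FDS")); n != 1 {
		return nil, fmt.Errorf("expected exactly one inherited fd, got %d", n)
	}
	f := os.NewFile(3, "connection")
	defer f.Close() // net.FileConn dups the fd, so the original can be closed
	return net.FileConn(f)
}

func main() {
	conn, err := activatedConn()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer conn.Close()
	fmt.Fprintf(conn, "hello from a per-connection service\n")
}
```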
Sep 12 17:37:04.827002 systemd[1]: Started sshd@16-172.31.16.204:22-147.75.109.163:37800.service - OpenSSH per-connection server daemon (147.75.109.163:37800). Sep 12 17:37:04.828896 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:37:04.834205 systemd-logind[2070]: Removed session 16. Sep 12 17:37:05.048904 sshd[6835]: Accepted publickey for core from 147.75.109.163 port 37800 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:05.052288 sshd[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:05.066314 systemd-logind[2070]: New session 17 of user core. Sep 12 17:37:05.073411 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:37:07.523996 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:07.523448 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:07.523509 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:09.041685 sshd[6835]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:09.143176 systemd[1]: Started sshd@17-172.31.16.204:22-147.75.109.163:37814.service - OpenSSH per-connection server daemon (147.75.109.163:37814). Sep 12 17:37:09.143890 systemd[1]: sshd@16-172.31.16.204:22-147.75.109.163:37800.service: Deactivated successfully. Sep 12 17:37:09.167996 systemd-logind[2070]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:37:09.169681 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:37:09.175014 systemd-logind[2070]: Removed session 17. Sep 12 17:37:09.422864 sshd[6863]: Accepted publickey for core from 147.75.109.163 port 37814 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:09.425776 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:09.432605 systemd-logind[2070]: New session 18 of user core. Sep 12 17:37:09.441108 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:37:09.573090 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:09.569817 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:09.574406 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:10.632670 sshd[6863]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:10.638169 systemd-logind[2070]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:37:10.640962 systemd[1]: sshd@17-172.31.16.204:22-147.75.109.163:37814.service: Deactivated successfully. Sep 12 17:37:10.647223 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:37:10.648543 systemd-logind[2070]: Removed session 18. Sep 12 17:37:10.673150 systemd[1]: Started sshd@18-172.31.16.204:22-147.75.109.163:42226.service - OpenSSH per-connection server daemon (147.75.109.163:42226). Sep 12 17:37:10.876658 sshd[6880]: Accepted publickey for core from 147.75.109.163 port 42226 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:10.879093 sshd[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:10.884594 systemd-logind[2070]: New session 19 of user core. Sep 12 17:37:10.893269 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:37:11.252278 sshd[6880]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:11.257362 systemd[1]: sshd@18-172.31.16.204:22-147.75.109.163:42226.service: Deactivated successfully. 
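The recurring "Under memory pressure, flushing caches" pairs from systemd-resolved and systemd-journald are driven by the kernel's pressure-stall information (PSI): recent systemd wires sd-event's memory-pressure support to a trigger on /proc/pressure/memory and drops caches when stall time crosses the configured threshold. A small sketch that reads and parses that file (lines look like `some avg10=1.23 avg60=0.80 avg300=0.15 total=12345678`):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// psiLine parses one /proc/pressure/memory line and pulls out the 10-second
// stall average, which is the figure pressure-watchers typically act on.
func psiLine(line string) (kind string, avg10 float64, err error) {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return "", 0, fmt.Errorf("short PSI line: %q", line)
	}
	kind = fields[0] // "some" or "full"
	for _, f := range fields[1:] {
		if v, ok := strings.CutPrefix(f, "avg10="); ok {
			avg10, err = strconv.ParseFloat(v, 64)
			return kind, avg10, err
		}
	}
	return kind, 0, fmt.Errorf("no avg10 in %q", line)
}

func main() {
	data, err := os.ReadFile("/proc/pressure/memory")
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // PSI requires CONFIG_PSI=y
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		kind, avg10, err := psiLine(line)
		if err != nil {
			continue
		}
		// A service like systemd-resolved flushes its caches when these
		// percentages climb past its configured limit.
		fmt.Printf("memory pressure (%s): %.2f%% of the last 10s stalled\n", kind, avg10)
	}
}
```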
Sep 12 17:37:11.263272 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:37:11.263762 systemd-logind[2070]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:37:11.266154 systemd-logind[2070]: Removed session 19. Sep 12 17:37:15.520822 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:15.542466 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:15.520860 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:16.340384 systemd[1]: Started sshd@19-172.31.16.204:22-147.75.109.163:42230.service - OpenSSH per-connection server daemon (147.75.109.163:42230). Sep 12 17:37:16.784342 sshd[6971]: Accepted publickey for core from 147.75.109.163 port 42230 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:16.792056 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:16.815008 systemd-logind[2070]: New session 20 of user core. Sep 12 17:37:16.819108 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:37:17.568965 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:17.571169 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:17.568975 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:17.877989 sshd[6971]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:17.895934 systemd[1]: sshd@19-172.31.16.204:22-147.75.109.163:42230.service: Deactivated successfully. Sep 12 17:37:17.902122 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:37:17.902139 systemd-logind[2070]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:37:17.904315 systemd-logind[2070]: Removed session 20. Sep 12 17:37:19.625827 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:19.618122 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:19.618183 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:21.665012 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:21.713708 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:21.665038 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:22.362658 kubelet[3633]: E0912 17:37:22.357170 3633 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.003s" Sep 12 17:37:22.747330 update_engine[2073]: I20250912 17:37:22.747174 2073 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:37:22.747330 update_engine[2073]: I20250912 17:37:22.747249 2073 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:37:22.753740 update_engine[2073]: I20250912 17:37:22.752861 2073 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:37:22.759266 update_engine[2073]: I20250912 17:37:22.758445 2073 omaha_request_params.cc:62] Current group set to lts Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.766829 2073 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.766875 2073 update_attempter.cc:643] Scheduling an action processor start. 
Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.766911 2073 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.766974 2073 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.767089 2073 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.767102 2073 omaha_request_action.cc:272] Request: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: Sep 12 17:37:22.768277 update_engine[2073]: I20250912 17:37:22.767112 2073 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:22.837645 locksmithd[2127]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:37:22.864478 update_engine[2073]: I20250912 17:37:22.863945 2073 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:22.869880 update_engine[2073]: I20250912 17:37:22.864374 2073 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:22.902133 update_engine[2073]: E20250912 17:37:22.901959 2073 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:22.902133 update_engine[2073]: I20250912 17:37:22.902091 2073 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:37:22.945612 systemd[1]: Started sshd@20-172.31.16.204:22-147.75.109.163:57138.service - OpenSSH per-connection server daemon (147.75.109.163:57138). Sep 12 17:37:23.450651 sshd[7011]: Accepted publickey for core from 147.75.109.163 port 57138 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:23.457187 sshd[7011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:23.502302 systemd-logind[2070]: New session 21 of user core. Sep 12 17:37:23.505547 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:37:23.714865 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:23.731859 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:23.714877 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:25.041829 sshd[7011]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:25.057824 systemd-logind[2070]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:37:25.059493 systemd[1]: sshd@20-172.31.16.204:22-147.75.109.163:57138.service: Deactivated successfully. Sep 12 17:37:25.073504 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:37:25.076818 systemd-logind[2070]: Removed session 21. Sep 12 17:37:30.077084 systemd[1]: Started sshd@21-172.31.16.204:22-147.75.109.163:53590.service - OpenSSH per-connection server daemon (147.75.109.163:53590). Sep 12 17:37:30.376527 sshd[7026]: Accepted publickey for core from 147.75.109.163 port 53590 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:30.380595 sshd[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:30.392953 systemd-logind[2070]: New session 22 of user core. 
Sep 12 17:37:30.399614 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:37:31.502371 sshd[7026]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:31.512468 systemd[1]: sshd@21-172.31.16.204:22-147.75.109.163:53590.service: Deactivated successfully. Sep 12 17:37:31.530897 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:37:31.532541 systemd-logind[2070]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:37:31.534403 systemd-logind[2070]: Removed session 22. Sep 12 17:37:33.610792 update_engine[2073]: I20250912 17:37:33.608327 2073 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:33.615357 update_engine[2073]: I20250912 17:37:33.615315 2073 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:33.615631 update_engine[2073]: I20250912 17:37:33.615593 2073 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:33.616031 update_engine[2073]: E20250912 17:37:33.615991 2073 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:33.616138 update_engine[2073]: I20250912 17:37:33.616068 2073 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:37:36.548199 systemd[1]: Started sshd@22-172.31.16.204:22-147.75.109.163:53594.service - OpenSSH per-connection server daemon (147.75.109.163:53594). Sep 12 17:37:36.843436 sshd[7040]: Accepted publickey for core from 147.75.109.163 port 53594 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:36.848029 sshd[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:36.855617 systemd-logind[2070]: New session 23 of user core. Sep 12 17:37:36.867171 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:37:37.541856 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:37.537819 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:37.537849 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:37.880818 sshd[7040]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:37.887403 systemd-logind[2070]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:37:37.888542 systemd[1]: sshd@22-172.31.16.204:22-147.75.109.163:53594.service: Deactivated successfully. Sep 12 17:37:37.900009 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:37:37.911326 systemd-logind[2070]: Removed session 23. Sep 12 17:37:41.506375 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:41.507135 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:41.506417 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:42.921116 systemd[1]: Started sshd@23-172.31.16.204:22-147.75.109.163:50168.service - OpenSSH per-connection server daemon (147.75.109.163:50168). Sep 12 17:37:43.238999 sshd[7056]: Accepted publickey for core from 147.75.109.163 port 50168 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:43.244231 sshd[7056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:43.255309 systemd-logind[2070]: New session 24 of user core. Sep 12 17:37:43.262523 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:37:43.556869 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:43.552822 systemd-resolved[1979]: Under memory pressure, flushing caches. 
Sep 12 17:37:43.552831 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:43.606850 update_engine[2073]: I20250912 17:37:43.606703 2073 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:43.606850 update_engine[2073]: I20250912 17:37:43.607045 2073 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:43.606850 update_engine[2073]: I20250912 17:37:43.607324 2073 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:43.606850 update_engine[2073]: E20250912 17:37:43.608698 2073 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:43.606850 update_engine[2073]: I20250912 17:37:43.608780 2073 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 12 17:37:44.971234 sshd[7056]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:44.980738 systemd-logind[2070]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:37:44.982551 systemd[1]: sshd@23-172.31.16.204:22-147.75.109.163:50168.service: Deactivated successfully. Sep 12 17:37:44.999198 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:37:45.002582 systemd-logind[2070]: Removed session 24. Sep 12 17:37:45.600984 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:45.604467 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:45.601013 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:47.649865 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:47.651913 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:47.651931 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:49.376608 systemd[1]: run-containerd-runc-k8s.io-ccce3726f7c259e9b3cb1cb301f3924cf8a7b597ed4be46294a80c0fac1f0807-runc.TPu3GC.mount: Deactivated successfully. Sep 12 17:37:49.985203 systemd[1]: run-containerd-runc-k8s.io-05bd44a016d54c77e0170d00d23323f3dbf9466568847faff92f2d7e62c1044c-runc.8wKIwA.mount: Deactivated successfully. Sep 12 17:37:51.490193 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:51.494809 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:51.490201 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:53.536828 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:53.539565 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:53.536837 systemd-resolved[1979]: Flushed all caches. Sep 12 17:37:53.629293 update_engine[2073]: I20250912 17:37:53.627959 2073 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:53.649018 update_engine[2073]: I20250912 17:37:53.648970 2073 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:53.649324 update_engine[2073]: I20250912 17:37:53.649289 2073 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:53.649806 update_engine[2073]: E20250912 17:37:53.649765 2073 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:53.649917 update_engine[2073]: I20250912 17:37:53.649847 2073 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:37:53.659750 update_engine[2073]: I20250912 17:37:53.658757 2073 omaha_request_action.cc:617] Omaha request response: Sep 12 17:37:53.660433 update_engine[2073]: E20250912 17:37:53.660316 2073 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 12 17:37:53.756833 update_engine[2073]: I20250912 17:37:53.756768 2073 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 12 17:37:53.760689 update_engine[2073]: I20250912 17:37:53.756937 2073 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:37:53.760689 update_engine[2073]: I20250912 17:37:53.756953 2073 update_attempter.cc:306] Processing Done. Sep 12 17:37:53.760689 update_engine[2073]: E20250912 17:37:53.756981 2073 update_attempter.cc:619] Update failed. Sep 12 17:37:53.760689 update_engine[2073]: I20250912 17:37:53.756989 2073 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 12 17:37:53.760689 update_engine[2073]: I20250912 17:37:53.756997 2073 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 12 17:37:53.760689 update_engine[2073]: I20250912 17:37:53.757005 2073 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 12 17:37:53.762152 update_engine[2073]: I20250912 17:37:53.761948 2073 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:37:53.762152 update_engine[2073]: I20250912 17:37:53.762027 2073 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:37:53.762152 update_engine[2073]: I20250912 17:37:53.762038 2073 omaha_request_action.cc:272] Request: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: Sep 12 17:37:53.762152 update_engine[2073]: I20250912 17:37:53.762048 2073 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.762566 2073 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.780045 2073 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:53.781658 update_engine[2073]: E20250912 17:37:53.780952 2073 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781041 2073 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781052 2073 omaha_request_action.cc:617] Omaha request response: Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781064 2073 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781071 2073 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781078 2073 update_attempter.cc:306] Processing Done. Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781088 2073 update_attempter.cc:310] Error event sent. 
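Walking through this update_engine failure: each fetch arms a 1-second timeout source, libcurl fails to resolve the literal host `disabled` (consistent with the Omaha SERVER setting having been pointed at a non-resolvable placeholder to switch update polling off), and after retry 3 the transfer is abandoned, error code 2000 is folded into kActionCodeOmahaErrorInHTTPResponse, and an error event is posted; the next line then schedules a fresh check. A Go sketch of that bounded-retry-then-reschedule shape, with made-up helper names (fetchOnce, checkForUpdate):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// fetchOnce stands in for one libcurl transfer; here it always fails the way
// the log does ("Could not resolve host: disabled").
func fetchOnce(url string) error {
	return fmt.Errorf("could not resolve host: %s", url)
}

// checkForUpdate retries the Omaha request a fixed number of times, then
// gives up and tells the caller when to try again, mirroring
// "No HTTP response, retry N" ... "Next update check in 47m29s".
func checkForUpdate(url string, maxRetries int, retryDelay time.Duration) (time.Duration, error) {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if lastErr = fetchOnce(url); lastErr == nil {
			return 0, nil
		}
		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, lastErr)
		time.Sleep(retryDelay)
	}
	// "Omaha request network transfer failed." -> report and reschedule.
	return 47*time.Minute + 29*time.Second,
		errors.Join(errors.New("omaha request failed"), lastErr)
}

func main() {
	next, err := checkForUpdate("disabled", 3, 10*time.Millisecond) // short delay for the demo
	fmt.Printf("update failed (%v); next check in %s\n", err, next)
}
```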
Sep 12 17:37:53.781658 update_engine[2073]: I20250912 17:37:53.781109 2073 update_check_scheduler.cc:74] Next update check in 47m29s Sep 12 17:37:53.801145 locksmithd[2127]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 12 17:37:53.803150 locksmithd[2127]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 12 17:37:55.587491 systemd-journald[1571]: Under memory pressure, flushing caches. Sep 12 17:37:55.584968 systemd-resolved[1979]: Under memory pressure, flushing caches. Sep 12 17:37:55.584995 systemd-resolved[1979]: Flushed all caches. Sep 12 17:38:08.614068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0-rootfs.mount: Deactivated successfully. Sep 12 17:38:08.664205 containerd[2102]: time="2025-09-12T17:38:08.647968466Z" level=info msg="shim disconnected" id=01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0 namespace=k8s.io Sep 12 17:38:08.664205 containerd[2102]: time="2025-09-12T17:38:08.664205780Z" level=warning msg="cleaning up after shim disconnected" id=01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0 namespace=k8s.io Sep 12 17:38:08.665903 containerd[2102]: time="2025-09-12T17:38:08.664226860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:08.864334 containerd[2102]: time="2025-09-12T17:38:08.863913310Z" level=info msg="shim disconnected" id=e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887 namespace=k8s.io Sep 12 17:38:08.864334 containerd[2102]: time="2025-09-12T17:38:08.864042677Z" level=warning msg="cleaning up after shim disconnected" id=e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887 namespace=k8s.io Sep 12 17:38:08.864334 containerd[2102]: time="2025-09-12T17:38:08.864056820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:08.876444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887-rootfs.mount: Deactivated successfully. Sep 12 17:38:09.166560 kubelet[3633]: I0912 17:38:09.154574 3633 scope.go:117] "RemoveContainer" containerID="01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0" Sep 12 17:38:09.173988 kubelet[3633]: I0912 17:38:09.166761 3633 scope.go:117] "RemoveContainer" containerID="e63361cce2d262e91c8adbf1b153c7bf874057eea4a90ea8f01e1631a1045887" Sep 12 17:38:09.266738 containerd[2102]: time="2025-09-12T17:38:09.265707268Z" level=info msg="CreateContainer within sandbox \"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:38:09.267950 containerd[2102]: time="2025-09-12T17:38:09.266940352Z" level=info msg="CreateContainer within sandbox \"930cbdc68e65edb07b2d665ae7bcc965a7ac3844977ba553c680f55adb352f34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:38:09.406914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536225017.mount: Deactivated successfully. Sep 12 17:38:09.416064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578949910.mount: Deactivated successfully. 
Sep 12 17:38:09.426447 containerd[2102]: time="2025-09-12T17:38:09.425955744Z" level=info msg="CreateContainer within sandbox \"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"45c1066c016b45e28a1aa04aba432767f0354f86b851e3475896d3b66ba782d3\"" Sep 12 17:38:09.427740 containerd[2102]: time="2025-09-12T17:38:09.426857701Z" level=info msg="StartContainer for \"45c1066c016b45e28a1aa04aba432767f0354f86b851e3475896d3b66ba782d3\"" Sep 12 17:38:09.428504 containerd[2102]: time="2025-09-12T17:38:09.428442171Z" level=info msg="CreateContainer within sandbox \"930cbdc68e65edb07b2d665ae7bcc965a7ac3844977ba553c680f55adb352f34\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7c5d1c313e91c22fda43078903dec2a707347465112a09d21856a24bee2ae5aa\"" Sep 12 17:38:09.429014 containerd[2102]: time="2025-09-12T17:38:09.428986751Z" level=info msg="StartContainer for \"7c5d1c313e91c22fda43078903dec2a707347465112a09d21856a24bee2ae5aa\"" Sep 12 17:38:09.574777 containerd[2102]: time="2025-09-12T17:38:09.574248741Z" level=info msg="StartContainer for \"7c5d1c313e91c22fda43078903dec2a707347465112a09d21856a24bee2ae5aa\" returns successfully" Sep 12 17:38:09.574777 containerd[2102]: time="2025-09-12T17:38:09.574356988Z" level=info msg="StartContainer for \"45c1066c016b45e28a1aa04aba432767f0354f86b851e3475896d3b66ba782d3\" returns successfully" Sep 12 17:38:09.591785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721577186.mount: Deactivated successfully. Sep 12 17:38:13.778987 containerd[2102]: time="2025-09-12T17:38:13.778589431Z" level=info msg="shim disconnected" id=cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2 namespace=k8s.io Sep 12 17:38:13.778987 containerd[2102]: time="2025-09-12T17:38:13.778661964Z" level=warning msg="cleaning up after shim disconnected" id=cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2 namespace=k8s.io Sep 12 17:38:13.778987 containerd[2102]: time="2025-09-12T17:38:13.778678772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:13.783706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2-rootfs.mount: Deactivated successfully. 
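The sequence above, shim disconnected → RemoveContainer → CreateContainer(Attempt:1) → StartContainer, is kubelet restarting a crashed container in place: the pod sandbox survives, the dead container is removed, and a replacement is created with the same name but an incremented attempt counter. A compressed sketch of that cycle with simplified stand-in types (not kubelet's actual interfaces), using the kube-controller-manager IDs from the log:

```go
package main

import "fmt"

// ContainerMetadata mirrors the &ContainerMetadata{Name:...,Attempt:N} shape
// printed by the CreateContainer log lines.
type ContainerMetadata struct {
	Name    string
	Attempt uint32
}

type runtime struct{ nextID int }

func (r *runtime) CreateContainer(sandboxID string, meta ContainerMetadata) string {
	r.nextID++
	id := fmt.Sprintf("container-%d", r.nextID)
	fmt.Printf("CreateContainer within sandbox %q for %+v returns %q\n", sandboxID, meta, id)
	return id
}

func (r *runtime) StartContainer(id string) { fmt.Printf("StartContainer for %q\n", id) }

// restartExited recreates an exited container inside its still-live sandbox,
// bumping Attempt the way the log shows (Attempt:0 -> Attempt:1).
func restartExited(r *runtime, sandboxID, deadID string, meta ContainerMetadata) {
	fmt.Printf("RemoveContainer containerID=%q\n", deadID)
	meta.Attempt++
	r.StartContainer(r.CreateContainer(sandboxID, meta))
}

func main() {
	rt := &runtime{}
	restartExited(rt,
		"27125e2f430aa343f1da16287a914ff849796a7797b4f15cf7c1c3c644bb2af7",
		"01b94358c74ec5f1f315a4e9fbe7843686aa1efc6b56c1256daf16f6d24a84c0",
		ContainerMetadata{Name: "kube-controller-manager", Attempt: 0})
}
```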
Sep 12 17:38:14.126200 kubelet[3633]: I0912 17:38:14.126079 3633 scope.go:117] "RemoveContainer" containerID="cb51e5ab5a3b36efb82395de9c269c2c4a785bf9ad1e1788bdd0979f07720ee2" Sep 12 17:38:14.135464 containerd[2102]: time="2025-09-12T17:38:14.135402304Z" level=info msg="CreateContainer within sandbox \"6c0f9ef6022008986fb09a85e0971cc79707f9de3f4cfaff8eb96f4cca4044ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:38:14.195034 containerd[2102]: time="2025-09-12T17:38:14.194897465Z" level=info msg="CreateContainer within sandbox \"6c0f9ef6022008986fb09a85e0971cc79707f9de3f4cfaff8eb96f4cca4044ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"96d1dc757a49afbd4993b23586ca338277ede83e504f9d16d43381b74864ab67\"" Sep 12 17:38:14.195604 containerd[2102]: time="2025-09-12T17:38:14.195550521Z" level=info msg="StartContainer for \"96d1dc757a49afbd4993b23586ca338277ede83e504f9d16d43381b74864ab67\"" Sep 12 17:38:14.302773 containerd[2102]: time="2025-09-12T17:38:14.302706841Z" level=info msg="StartContainer for \"96d1dc757a49afbd4993b23586ca338277ede83e504f9d16d43381b74864ab67\" returns successfully" Sep 12 17:38:14.790324 systemd[1]: run-containerd-runc-k8s.io-96d1dc757a49afbd4993b23586ca338277ede83e504f9d16d43381b74864ab67-runc.LnXpO9.mount: Deactivated successfully. Sep 12 17:38:14.992742 kubelet[3633]: E0912 17:38:14.992571 3633 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-204?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 12 17:38:15.466102 systemd[1]: run-containerd-runc-k8s.io-f19e6d04f7fc8a58d6f31f5f199d3b0bda0380e1a6c190678e5a11eb57845fc1-runc.egR581.mount: Deactivated successfully.
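The closing kubelet error is the node-lease heartbeat timing out against the API server; the 10-second deadline is visible as the `timeout=10s` parameter in the failing PUT. A sketch of what that renewal amounts to in client-go terms, assuming a kubeconfig-based clientset and an illustrative renewNodeLease helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// renewNodeLease bumps renewTime on the node's Lease in kube-node-lease,
// bounded by the same kind of 10s deadline the failing request used.
func renewNodeLease(cs kubernetes.Interface, node string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	leases := cs.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err // a timeout here surfaces much like the controller.go:195 line
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("renew:", renewNodeLease(cs, "ip-172-31-16-204"))
}
```

When the Update call exceeds its deadline the error bubbles up and the kubelet simply retries on its next sync, which is why a single failed renewal like this one is harmless as long as later attempts succeed within the lease duration.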